00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 208 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3710 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.121 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.122 The recommended git tool is: git 00:00:00.122 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.182 Fetching changes from the remote Git repository 00:00:00.186 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.243 Using shallow fetch with depth 1 00:00:00.243 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.243 > git --version # timeout=10 00:00:00.286 > git --version # 'git version 2.39.2' 00:00:00.286 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.313 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.313 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.377 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.388 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.400 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.400 > git config core.sparsecheckout # timeout=10 00:00:07.412 > git read-tree -mu HEAD # timeout=10 00:00:07.428 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.452 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.452 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.548 [Pipeline] Start of Pipeline 00:00:07.558 [Pipeline] library 00:00:07.559 Loading library shm_lib@master 00:00:07.559 Library shm_lib@master is cached. Copying from home. 00:00:07.572 [Pipeline] node 00:00:07.583 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.584 [Pipeline] { 00:00:07.591 [Pipeline] catchError 00:00:07.592 [Pipeline] { 00:00:07.602 [Pipeline] wrap 00:00:07.611 [Pipeline] { 00:00:07.619 [Pipeline] stage 00:00:07.621 [Pipeline] { (Prologue) 00:00:07.638 [Pipeline] echo 00:00:07.639 Node: VM-host-SM17 00:00:07.646 [Pipeline] cleanWs 00:00:07.656 [WS-CLEANUP] Deleting project workspace... 00:00:07.656 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.662 [WS-CLEANUP] done 00:00:07.916 [Pipeline] setCustomBuildProperty 00:00:07.971 [Pipeline] httpRequest 00:00:10.167 [Pipeline] echo 00:00:10.168 Sorcerer 10.211.164.101 is alive 00:00:10.177 [Pipeline] retry 00:00:10.178 [Pipeline] { 00:00:10.190 [Pipeline] httpRequest 00:00:10.195 HttpMethod: GET 00:00:10.195 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.196 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.213 Response Code: HTTP/1.1 200 OK 00:00:10.214 Success: Status code 200 is in the accepted range: 200,404 00:00:10.215 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.787 [Pipeline] } 00:00:31.804 [Pipeline] // retry 00:00:31.818 [Pipeline] sh 00:00:32.100 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:32.117 [Pipeline] httpRequest 00:00:33.030 [Pipeline] echo 00:00:33.032 Sorcerer 10.211.164.101 is alive 00:00:33.041 [Pipeline] retry 00:00:33.043 [Pipeline] { 00:00:33.056 [Pipeline] httpRequest 00:00:33.062 HttpMethod: GET 00:00:33.062 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:33.063 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:33.080 Response Code: HTTP/1.1 200 OK 00:00:33.081 Success: Status code 200 is in the accepted range: 200,404 00:00:33.081 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:04.393 [Pipeline] } 00:01:04.411 [Pipeline] // retry 00:01:04.419 [Pipeline] sh 00:01:04.700 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:07.253 [Pipeline] sh 00:01:07.535 + git -C spdk log --oneline -n5 00:01:07.535 b18e1bd62 version: v24.09.1-pre 00:01:07.535 19524ad45 version: v24.09 00:01:07.535 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:07.535 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:07.535 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:07.554 [Pipeline] withCredentials 00:01:07.564 > git --version # timeout=10 00:01:07.577 > git --version # 'git version 2.39.2' 00:01:07.591 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:07.593 [Pipeline] { 00:01:07.601 [Pipeline] retry 00:01:07.603 [Pipeline] { 00:01:07.617 [Pipeline] sh 00:01:07.896 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:08.169 [Pipeline] } 00:01:08.190 [Pipeline] // retry 00:01:08.195 [Pipeline] } 00:01:08.212 [Pipeline] // withCredentials 00:01:08.222 [Pipeline] httpRequest 00:01:08.611 [Pipeline] echo 00:01:08.613 Sorcerer 10.211.164.101 is alive 00:01:08.624 [Pipeline] retry 00:01:08.627 [Pipeline] { 00:01:08.641 [Pipeline] httpRequest 00:01:08.646 HttpMethod: GET 00:01:08.647 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.647 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:08.657 Response Code: HTTP/1.1 200 OK 00:01:08.658 Success: Status code 200 is in the accepted range: 200,404 00:01:08.658 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.391 [Pipeline] } 00:01:30.408 [Pipeline] // retry 00:01:30.415 [Pipeline] sh 00:01:30.696 
+ tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.084 [Pipeline] sh 00:01:32.362 + git -C dpdk log --oneline -n5 00:01:32.362 caf0f5d395 version: 22.11.4 00:01:32.362 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:32.362 dc9c799c7d vhost: fix missing spinlock unlock 00:01:32.362 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:32.362 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:32.380 [Pipeline] writeFile 00:01:32.395 [Pipeline] sh 00:01:32.675 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:32.687 [Pipeline] sh 00:01:32.969 + cat autorun-spdk.conf 00:01:32.969 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.969 SPDK_TEST_NVMF=1 00:01:32.969 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.969 SPDK_TEST_URING=1 00:01:32.969 SPDK_TEST_USDT=1 00:01:32.969 SPDK_RUN_UBSAN=1 00:01:32.969 NET_TYPE=virt 00:01:32.969 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:32.969 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.969 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.975 RUN_NIGHTLY=1 00:01:32.977 [Pipeline] } 00:01:32.991 [Pipeline] // stage 00:01:33.008 [Pipeline] stage 00:01:33.010 [Pipeline] { (Run VM) 00:01:33.023 [Pipeline] sh 00:01:33.303 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:33.303 + echo 'Start stage prepare_nvme.sh' 00:01:33.303 Start stage prepare_nvme.sh 00:01:33.303 + [[ -n 6 ]] 00:01:33.303 + disk_prefix=ex6 00:01:33.303 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:33.303 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:33.303 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:33.303 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.303 ++ SPDK_TEST_NVMF=1 00:01:33.303 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.303 ++ SPDK_TEST_URING=1 00:01:33.303 ++ SPDK_TEST_USDT=1 00:01:33.303 ++ SPDK_RUN_UBSAN=1 00:01:33.303 ++ NET_TYPE=virt 00:01:33.303 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:33.303 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:33.303 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.303 ++ RUN_NIGHTLY=1 00:01:33.303 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:33.303 + nvme_files=() 00:01:33.303 + declare -A nvme_files 00:01:33.303 + backend_dir=/var/lib/libvirt/images/backends 00:01:33.303 + nvme_files['nvme.img']=5G 00:01:33.303 + nvme_files['nvme-cmb.img']=5G 00:01:33.304 + nvme_files['nvme-multi0.img']=4G 00:01:33.304 + nvme_files['nvme-multi1.img']=4G 00:01:33.304 + nvme_files['nvme-multi2.img']=4G 00:01:33.304 + nvme_files['nvme-openstack.img']=8G 00:01:33.304 + nvme_files['nvme-zns.img']=5G 00:01:33.304 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:33.304 + (( SPDK_TEST_FTL == 1 )) 00:01:33.304 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:33.304 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:33.304 + for nvme in "${!nvme_files[@]}" 00:01:33.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:33.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:33.304 + for nvme in "${!nvme_files[@]}" 00:01:33.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:33.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.304 + for nvme in "${!nvme_files[@]}" 00:01:33.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:33.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:33.304 + for nvme in "${!nvme_files[@]}" 00:01:33.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:33.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.304 + for nvme in "${!nvme_files[@]}" 00:01:33.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:33.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:33.304 + for nvme in "${!nvme_files[@]}" 00:01:33.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:33.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:33.304 + for nvme in "${!nvme_files[@]}" 00:01:33.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:33.562 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.562 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:33.562 + echo 'End stage prepare_nvme.sh' 00:01:33.562 End stage prepare_nvme.sh 00:01:33.573 [Pipeline] sh 00:01:33.854 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:33.854 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:33.854 00:01:33.854 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:33.854 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:33.854 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:33.854 HELP=0 00:01:33.854 DRY_RUN=0 00:01:33.854 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:33.854 NVME_DISKS_TYPE=nvme,nvme, 00:01:33.854 NVME_AUTO_CREATE=0 00:01:33.854 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:33.854 NVME_CMB=,, 00:01:33.854 NVME_PMR=,, 00:01:33.854 NVME_ZNS=,, 00:01:33.854 NVME_MS=,, 00:01:33.854 NVME_FDP=,, 
00:01:33.854 SPDK_VAGRANT_DISTRO=fedora39 00:01:33.854 SPDK_VAGRANT_VMCPU=10 00:01:33.854 SPDK_VAGRANT_VMRAM=12288 00:01:33.854 SPDK_VAGRANT_PROVIDER=libvirt 00:01:33.854 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:33.854 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:33.854 SPDK_OPENSTACK_NETWORK=0 00:01:33.854 VAGRANT_PACKAGE_BOX=0 00:01:33.854 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:33.854 FORCE_DISTRO=true 00:01:33.854 VAGRANT_BOX_VERSION= 00:01:33.854 EXTRA_VAGRANTFILES= 00:01:33.854 NIC_MODEL=e1000 00:01:33.854 00:01:33.854 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:33.854 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:36.449 Bringing machine 'default' up with 'libvirt' provider... 00:01:37.015 ==> default: Creating image (snapshot of base box volume). 00:01:37.272 ==> default: Creating domain with the following settings... 00:01:37.272 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733681934_a983feddf694350ea22d 00:01:37.272 ==> default: -- Domain type: kvm 00:01:37.272 ==> default: -- Cpus: 10 00:01:37.272 ==> default: -- Feature: acpi 00:01:37.272 ==> default: -- Feature: apic 00:01:37.273 ==> default: -- Feature: pae 00:01:37.273 ==> default: -- Memory: 12288M 00:01:37.273 ==> default: -- Memory Backing: hugepages: 00:01:37.273 ==> default: -- Management MAC: 00:01:37.273 ==> default: -- Loader: 00:01:37.273 ==> default: -- Nvram: 00:01:37.273 ==> default: -- Base box: spdk/fedora39 00:01:37.273 ==> default: -- Storage pool: default 00:01:37.273 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733681934_a983feddf694350ea22d.img (20G) 00:01:37.273 ==> default: -- Volume Cache: default 00:01:37.273 ==> default: -- Kernel: 00:01:37.273 ==> default: -- Initrd: 00:01:37.273 ==> default: -- Graphics Type: vnc 00:01:37.273 ==> default: -- Graphics Port: -1 00:01:37.273 ==> default: -- Graphics IP: 127.0.0.1 00:01:37.273 ==> default: -- Graphics Password: Not defined 00:01:37.273 ==> default: -- Video Type: cirrus 00:01:37.273 ==> default: -- Video VRAM: 9216 00:01:37.273 ==> default: -- Sound Type: 00:01:37.273 ==> default: -- Keymap: en-us 00:01:37.273 ==> default: -- TPM Path: 00:01:37.273 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:37.273 ==> default: -- Command line args: 00:01:37.273 ==> default: -> value=-device, 00:01:37.273 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:37.273 ==> default: -> value=-drive, 00:01:37.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:37.273 ==> default: -> value=-device, 00:01:37.273 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.273 ==> default: -> value=-device, 00:01:37.273 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:37.273 ==> default: -> value=-drive, 00:01:37.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:37.273 ==> default: -> value=-device, 00:01:37.273 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.273 ==> default: -> value=-drive, 00:01:37.273 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:37.273 ==> default: -> value=-device, 00:01:37.273 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.273 ==> default: -> value=-drive, 00:01:37.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:37.273 ==> default: -> value=-device, 00:01:37.273 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.273 ==> default: Creating shared folders metadata... 00:01:37.273 ==> default: Starting domain. 00:01:38.644 ==> default: Waiting for domain to get an IP address... 00:01:56.741 ==> default: Waiting for SSH to become available... 00:01:56.741 ==> default: Configuring and enabling network interfaces... 00:01:59.280 default: SSH address: 192.168.121.24:22 00:01:59.280 default: SSH username: vagrant 00:01:59.280 default: SSH auth method: private key 00:02:01.845 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:08.423 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:14.988 ==> default: Mounting SSHFS shared folder... 00:02:15.924 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.924 ==> default: Checking Mount.. 00:02:17.361 ==> default: Folder Successfully Mounted! 00:02:17.361 ==> default: Running provisioner: file... 00:02:17.935 default: ~/.gitconfig => .gitconfig 00:02:18.502 00:02:18.502 SUCCESS! 00:02:18.502 00:02:18.502 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:18.502 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:18.502 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:18.502 00:02:18.510 [Pipeline] } 00:02:18.522 [Pipeline] // stage 00:02:18.529 [Pipeline] dir 00:02:18.530 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:18.531 [Pipeline] { 00:02:18.541 [Pipeline] catchError 00:02:18.543 [Pipeline] { 00:02:18.554 [Pipeline] sh 00:02:18.831 + vagrant ssh-config --host vagrant 00:02:18.831 + sed -ne /^Host/,$p 00:02:18.831 + tee ssh_conf 00:02:22.114 Host vagrant 00:02:22.114 HostName 192.168.121.24 00:02:22.114 User vagrant 00:02:22.114 Port 22 00:02:22.114 UserKnownHostsFile /dev/null 00:02:22.114 StrictHostKeyChecking no 00:02:22.114 PasswordAuthentication no 00:02:22.114 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:22.114 IdentitiesOnly yes 00:02:22.114 LogLevel FATAL 00:02:22.114 ForwardAgent yes 00:02:22.114 ForwardX11 yes 00:02:22.114 00:02:22.126 [Pipeline] withEnv 00:02:22.128 [Pipeline] { 00:02:22.143 [Pipeline] sh 00:02:22.421 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:22.421 source /etc/os-release 00:02:22.421 [[ -e /image.version ]] && img=$(< /image.version) 00:02:22.421 # Minimal, systemd-like check. 
00:02:22.421 if [[ -e /.dockerenv ]]; then 00:02:22.421 # Clear garbage from the node's name: 00:02:22.421 # agt-er_autotest_547-896 -> autotest_547-896 00:02:22.421 # $HOSTNAME is the actual container id 00:02:22.421 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:22.421 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:22.421 # We can assume this is a mount from a host where container is running, 00:02:22.421 # so fetch its hostname to easily identify the target swarm worker. 00:02:22.421 container="$(< /etc/hostname) ($agent)" 00:02:22.421 else 00:02:22.421 # Fallback 00:02:22.421 container=$agent 00:02:22.421 fi 00:02:22.421 fi 00:02:22.421 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:22.421 00:02:22.690 [Pipeline] } 00:02:22.706 [Pipeline] // withEnv 00:02:22.715 [Pipeline] setCustomBuildProperty 00:02:22.729 [Pipeline] stage 00:02:22.731 [Pipeline] { (Tests) 00:02:22.749 [Pipeline] sh 00:02:23.027 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:23.041 [Pipeline] sh 00:02:23.322 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:23.337 [Pipeline] timeout 00:02:23.337 Timeout set to expire in 1 hr 0 min 00:02:23.339 [Pipeline] { 00:02:23.353 [Pipeline] sh 00:02:23.633 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:24.199 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:24.212 [Pipeline] sh 00:02:24.490 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:24.761 [Pipeline] sh 00:02:25.040 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:25.339 [Pipeline] sh 00:02:25.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:25.877 ++ readlink -f spdk_repo 00:02:25.877 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.877 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.877 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.877 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:25.877 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.877 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:25.877 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.877 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:25.877 + cd /home/vagrant/spdk_repo 00:02:25.877 + source /etc/os-release 00:02:25.877 ++ NAME='Fedora Linux' 00:02:25.877 ++ VERSION='39 (Cloud Edition)' 00:02:25.877 ++ ID=fedora 00:02:25.877 ++ VERSION_ID=39 00:02:25.877 ++ VERSION_CODENAME= 00:02:25.877 ++ PLATFORM_ID=platform:f39 00:02:25.877 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.877 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.877 ++ LOGO=fedora-logo-icon 00:02:25.877 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.877 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.877 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.877 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.877 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.877 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.877 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.877 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.877 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.877 ++ SUPPORT_END=2024-11-12 00:02:25.877 ++ VARIANT='Cloud Edition' 00:02:25.877 ++ VARIANT_ID=cloud 00:02:25.877 + uname -a 00:02:25.877 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.877 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:26.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:26.446 Hugepages 00:02:26.446 node hugesize free / total 00:02:26.446 node0 1048576kB 0 / 0 00:02:26.446 node0 2048kB 0 / 0 00:02:26.446 00:02:26.446 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.446 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:26.446 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:26.446 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:26.446 + rm -f /tmp/spdk-ld-path 00:02:26.446 + source autorun-spdk.conf 00:02:26.446 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.446 ++ SPDK_TEST_NVMF=1 00:02:26.446 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:26.446 ++ SPDK_TEST_URING=1 00:02:26.446 ++ SPDK_TEST_USDT=1 00:02:26.446 ++ SPDK_RUN_UBSAN=1 00:02:26.446 ++ NET_TYPE=virt 00:02:26.446 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:26.446 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:26.446 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.446 ++ RUN_NIGHTLY=1 00:02:26.446 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:26.446 + [[ -n '' ]] 00:02:26.446 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:26.446 + for M in /var/spdk/build-*-manifest.txt 00:02:26.446 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:26.446 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.446 + for M in /var/spdk/build-*-manifest.txt 00:02:26.446 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:26.446 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.446 + for M in /var/spdk/build-*-manifest.txt 00:02:26.446 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:26.446 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.446 ++ uname 00:02:26.446 + [[ Linux == \L\i\n\u\x ]] 00:02:26.446 + sudo dmesg -T 00:02:26.446 + sudo dmesg --clear 00:02:26.446 + dmesg_pid=5940 00:02:26.446 + [[ Fedora Linux == FreeBSD ]] 
00:02:26.446 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.446 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.446 + sudo dmesg -Tw 00:02:26.446 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:26.446 + [[ -x /usr/src/fio-static/fio ]] 00:02:26.446 + export FIO_BIN=/usr/src/fio-static/fio 00:02:26.446 + FIO_BIN=/usr/src/fio-static/fio 00:02:26.446 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:26.446 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:26.446 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:26.446 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.446 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.446 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:26.446 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.446 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.446 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:26.446 Test configuration: 00:02:26.446 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.446 SPDK_TEST_NVMF=1 00:02:26.446 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:26.446 SPDK_TEST_URING=1 00:02:26.446 SPDK_TEST_USDT=1 00:02:26.446 SPDK_RUN_UBSAN=1 00:02:26.446 NET_TYPE=virt 00:02:26.446 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:26.446 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:26.446 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.713 RUN_NIGHTLY=1 18:19:44 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:26.713 18:19:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:26.713 18:19:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:26.713 18:19:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:26.713 18:19:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.713 18:19:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.713 18:19:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.713 18:19:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.713 18:19:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.713 18:19:44 -- paths/export.sh@5 -- $ export PATH 00:02:26.713 18:19:44 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.713 18:19:44 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:26.713 18:19:44 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:26.713 18:19:44 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733681984.XXXXXX 00:02:26.713 18:19:44 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733681984.Cxc2y8 00:02:26.713 18:19:44 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:26.713 18:19:44 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:26.713 18:19:44 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:26.713 18:19:44 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:26.713 18:19:44 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:26.713 18:19:44 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:26.713 18:19:44 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:26.713 18:19:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:26.713 18:19:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.713 18:19:44 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:26.713 18:19:44 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:26.713 18:19:44 -- pm/common@17 -- $ local monitor 00:02:26.713 18:19:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.713 18:19:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.714 18:19:44 -- pm/common@25 -- $ sleep 1 00:02:26.714 18:19:44 -- pm/common@21 -- $ date +%s 00:02:26.714 18:19:44 -- pm/common@21 -- $ date +%s 00:02:26.714 18:19:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733681984 00:02:26.714 18:19:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733681984 00:02:26.714 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733681984_collect-cpu-load.pm.log 00:02:26.714 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733681984_collect-vmstat.pm.log 00:02:27.666 18:19:45 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:27.666 18:19:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:27.666 18:19:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:27.666 18:19:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:27.666 18:19:45 -- spdk/autobuild.sh@16 -- $ date -u 00:02:27.666 Sun 
Dec 8 06:19:45 PM UTC 2024 00:02:27.666 18:19:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:27.666 v24.09-rc1-9-gb18e1bd62 00:02:27.666 18:19:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:27.666 18:19:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:27.666 18:19:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:27.666 18:19:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:27.666 18:19:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.666 18:19:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.666 ************************************ 00:02:27.666 START TEST ubsan 00:02:27.666 ************************************ 00:02:27.666 18:19:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:27.666 using ubsan 00:02:27.666 00:02:27.666 real 0m0.000s 00:02:27.666 user 0m0.000s 00:02:27.666 sys 0m0.000s 00:02:27.666 18:19:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:27.666 18:19:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.666 ************************************ 00:02:27.666 END TEST ubsan 00:02:27.666 ************************************ 00:02:27.666 18:19:45 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:27.666 18:19:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:27.666 18:19:45 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:27.666 18:19:45 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:27.666 18:19:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.666 18:19:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.666 ************************************ 00:02:27.666 START TEST build_native_dpdk 00:02:27.666 ************************************ 00:02:27.666 18:19:45 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:27.666 18:19:45 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:27.666 caf0f5d395 version: 22.11.4 00:02:27.666 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:27.666 dc9c799c7d vhost: fix missing spinlock unlock 00:02:27.666 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:27.666 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:27.666 18:19:45 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.667 
18:19:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:27.667 patching file config/rte_config.h 00:02:27.667 Hunk #1 succeeded at 60 (offset 1 line). 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:27.667 patching file lib/pcapng/rte_pcapng.c 00:02:27.667 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.667 18:19:45 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:27.667 18:19:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:27.925 18:19:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:27.925 18:19:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:27.925 18:19:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:33.196 The Meson build system 00:02:33.196 Version: 1.5.0 00:02:33.196 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:33.196 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:33.196 Build type: native build 00:02:33.196 Program cat found: YES (/usr/bin/cat) 00:02:33.196 Project name: DPDK 00:02:33.196 Project version: 22.11.4 00:02:33.196 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:33.196 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:33.196 Host machine cpu family: x86_64 00:02:33.196 Host machine cpu: x86_64 00:02:33.196 Message: ## Building in Developer Mode ## 00:02:33.196 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:33.196 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:33.196 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:33.196 Program objdump found: YES (/usr/bin/objdump) 00:02:33.196 Program python3 found: YES (/usr/bin/python3) 00:02:33.196 Program cat found: YES (/usr/bin/cat) 00:02:33.196 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:33.196 Checking for size of "void *" : 8 00:02:33.196 Checking for size of "void *" : 8 (cached) 00:02:33.196 Library m found: YES 00:02:33.196 Library numa found: YES 00:02:33.196 Has header "numaif.h" : YES 00:02:33.196 Library fdt found: NO 00:02:33.196 Library execinfo found: NO 00:02:33.196 Has header "execinfo.h" : YES 00:02:33.196 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:33.196 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:33.196 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:33.196 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:33.196 Run-time dependency openssl found: YES 3.1.1 00:02:33.196 Run-time dependency libpcap found: YES 1.10.4 00:02:33.196 Has header "pcap.h" with dependency libpcap: YES 00:02:33.196 Compiler for C supports arguments -Wcast-qual: YES 00:02:33.196 Compiler for C supports arguments -Wdeprecated: YES 00:02:33.196 Compiler for C supports arguments -Wformat: YES 00:02:33.196 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:33.196 Compiler for C supports arguments -Wformat-security: NO 00:02:33.196 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:33.196 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:33.196 Compiler for C supports arguments -Wnested-externs: YES 00:02:33.196 Compiler for C supports arguments -Wold-style-definition: YES 00:02:33.196 Compiler for C supports arguments -Wpointer-arith: YES 00:02:33.196 Compiler for C supports arguments -Wsign-compare: YES 00:02:33.196 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:33.196 Compiler for C supports arguments -Wundef: YES 00:02:33.196 Compiler for C supports arguments -Wwrite-strings: YES 00:02:33.196 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:33.196 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:33.196 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:33.196 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:33.196 Compiler for C supports arguments -mavx512f: YES 00:02:33.196 Checking if "AVX512 checking" compiles: YES 00:02:33.196 Fetching value of define "__SSE4_2__" : 1 00:02:33.196 Fetching value of define "__AES__" : 1 00:02:33.196 Fetching value of define "__AVX__" : 1 00:02:33.196 Fetching value of define "__AVX2__" : 1 00:02:33.196 Fetching value of define "__AVX512BW__" : (undefined) 00:02:33.196 Fetching value of define "__AVX512CD__" : (undefined) 00:02:33.196 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:33.196 Fetching value of define "__AVX512F__" : (undefined) 00:02:33.196 Fetching value of define "__AVX512VL__" : (undefined) 00:02:33.196 Fetching value of define "__PCLMUL__" : 1 00:02:33.196 Fetching value of define "__RDRND__" : 1 00:02:33.196 Fetching value of define "__RDSEED__" : 1 00:02:33.196 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:33.196 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:33.196 Message: lib/kvargs: Defining dependency "kvargs" 00:02:33.196 Message: lib/telemetry: Defining dependency "telemetry" 00:02:33.196 Checking for function "getentropy" : YES 00:02:33.196 Message: lib/eal: Defining dependency "eal" 00:02:33.196 Message: lib/ring: Defining dependency "ring" 00:02:33.196 Message: lib/rcu: Defining dependency "rcu" 00:02:33.196 Message: lib/mempool: Defining dependency "mempool" 00:02:33.196 Message: lib/mbuf: Defining dependency "mbuf" 00:02:33.196 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:33.196 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:33.196 Compiler for C supports arguments -mpclmul: YES 00:02:33.196 Compiler for C supports arguments -maes: YES 00:02:33.196 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:33.196 Compiler for C supports arguments -mavx512bw: YES 00:02:33.196 Compiler for C supports arguments -mavx512dq: YES 00:02:33.196 Compiler for C supports arguments -mavx512vl: YES 00:02:33.196 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:33.196 Compiler for C supports arguments -mavx2: YES 00:02:33.196 Compiler for C supports arguments -mavx: YES 00:02:33.196 Message: lib/net: Defining dependency "net" 00:02:33.196 Message: lib/meter: Defining dependency "meter" 00:02:33.196 Message: lib/ethdev: Defining dependency "ethdev" 00:02:33.196 Message: lib/pci: Defining dependency "pci" 00:02:33.196 Message: lib/cmdline: Defining dependency "cmdline" 00:02:33.196 Message: lib/metrics: Defining dependency "metrics" 00:02:33.196 Message: lib/hash: Defining dependency "hash" 00:02:33.196 Message: lib/timer: Defining dependency "timer" 00:02:33.196 Fetching value of define "__AVX2__" : 1 (cached) 00:02:33.196 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:33.196 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:33.196 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:33.196 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:33.196 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:33.196 Message: lib/acl: Defining dependency "acl" 00:02:33.196 Message: lib/bbdev: Defining dependency "bbdev" 00:02:33.196 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:33.196 Run-time dependency libelf found: YES 0.191 00:02:33.196 Message: lib/bpf: Defining dependency "bpf" 00:02:33.196 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:33.196 Message: lib/compressdev: Defining dependency "compressdev" 00:02:33.196 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:33.197 Message: lib/distributor: Defining dependency "distributor" 00:02:33.197 Message: lib/efd: Defining dependency "efd" 00:02:33.197 Message: lib/eventdev: Defining dependency "eventdev" 00:02:33.197 Message: lib/gpudev: Defining dependency "gpudev" 00:02:33.197 Message: lib/gro: Defining dependency "gro" 00:02:33.197 Message: lib/gso: Defining dependency "gso" 00:02:33.197 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:33.197 Message: lib/jobstats: Defining dependency "jobstats" 00:02:33.197 Message: lib/latencystats: Defining dependency "latencystats" 00:02:33.197 Message: lib/lpm: Defining dependency "lpm" 00:02:33.197 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:33.197 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:33.197 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:33.197 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:33.197 Message: lib/member: Defining dependency "member" 00:02:33.197 Message: lib/pcapng: Defining dependency "pcapng" 00:02:33.197 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:33.197 Message: lib/power: Defining dependency "power" 00:02:33.197 Message: lib/rawdev: Defining dependency "rawdev" 00:02:33.197 Message: lib/regexdev: Defining dependency "regexdev" 00:02:33.197 Message: lib/dmadev: Defining dependency "dmadev" 00:02:33.197 Message: lib/rib: Defining 
dependency "rib" 00:02:33.197 Message: lib/reorder: Defining dependency "reorder" 00:02:33.197 Message: lib/sched: Defining dependency "sched" 00:02:33.197 Message: lib/security: Defining dependency "security" 00:02:33.197 Message: lib/stack: Defining dependency "stack" 00:02:33.197 Has header "linux/userfaultfd.h" : YES 00:02:33.197 Message: lib/vhost: Defining dependency "vhost" 00:02:33.197 Message: lib/ipsec: Defining dependency "ipsec" 00:02:33.197 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:33.197 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:33.197 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:33.197 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:33.197 Message: lib/fib: Defining dependency "fib" 00:02:33.197 Message: lib/port: Defining dependency "port" 00:02:33.197 Message: lib/pdump: Defining dependency "pdump" 00:02:33.197 Message: lib/table: Defining dependency "table" 00:02:33.197 Message: lib/pipeline: Defining dependency "pipeline" 00:02:33.197 Message: lib/graph: Defining dependency "graph" 00:02:33.197 Message: lib/node: Defining dependency "node" 00:02:33.197 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:33.197 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:33.197 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:33.197 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:33.197 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:33.197 Compiler for C supports arguments -Wno-unused-value: YES 00:02:33.197 Compiler for C supports arguments -Wno-format: YES 00:02:33.197 Compiler for C supports arguments -Wno-format-security: YES 00:02:33.197 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:34.574 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:34.574 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:34.574 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:34.574 Fetching value of define "__AVX2__" : 1 (cached) 00:02:34.574 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.574 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.574 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:34.574 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:34.574 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:34.574 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.574 Configuring doxy-api.conf using configuration 00:02:34.574 Program sphinx-build found: NO 00:02:34.574 Configuring rte_build_config.h using configuration 00:02:34.574 Message: 00:02:34.574 ================= 00:02:34.574 Applications Enabled 00:02:34.574 ================= 00:02:34.574 00:02:34.574 apps: 00:02:34.574 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:34.574 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:34.574 test-security-perf, 00:02:34.574 00:02:34.574 Message: 00:02:34.574 ================= 00:02:34.574 Libraries Enabled 00:02:34.574 ================= 00:02:34.574 00:02:34.574 libs: 00:02:34.574 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:34.574 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:34.574 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:34.574 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:34.574 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:34.574 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:34.574 table, pipeline, graph, node, 00:02:34.574 00:02:34.574 Message: 00:02:34.574 =============== 00:02:34.574 Drivers Enabled 00:02:34.574 =============== 00:02:34.574 00:02:34.574 common: 00:02:34.574 00:02:34.574 bus: 00:02:34.574 pci, vdev, 00:02:34.574 mempool: 00:02:34.574 ring, 00:02:34.574 dma: 00:02:34.574 00:02:34.574 net: 00:02:34.574 i40e, 00:02:34.574 raw: 00:02:34.574 00:02:34.574 crypto: 00:02:34.574 00:02:34.574 compress: 00:02:34.574 00:02:34.574 regex: 00:02:34.574 00:02:34.574 vdpa: 00:02:34.574 00:02:34.574 event: 00:02:34.574 00:02:34.574 baseband: 00:02:34.574 00:02:34.574 gpu: 00:02:34.574 00:02:34.574 00:02:34.574 Message: 00:02:34.574 ================= 00:02:34.574 Content Skipped 00:02:34.574 ================= 00:02:34.574 00:02:34.574 apps: 00:02:34.574 00:02:34.574 libs: 00:02:34.574 kni: explicitly disabled via build config (deprecated lib) 00:02:34.574 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:34.574 00:02:34.574 drivers: 00:02:34.574 common/cpt: not in enabled drivers build config 00:02:34.574 common/dpaax: not in enabled drivers build config 00:02:34.574 common/iavf: not in enabled drivers build config 00:02:34.574 common/idpf: not in enabled drivers build config 00:02:34.574 common/mvep: not in enabled drivers build config 00:02:34.574 common/octeontx: not in enabled drivers build config 00:02:34.574 bus/auxiliary: not in enabled drivers build config 00:02:34.574 bus/dpaa: not in enabled drivers build config 00:02:34.574 bus/fslmc: not in enabled drivers build config 00:02:34.574 bus/ifpga: not in enabled drivers build config 00:02:34.574 bus/vmbus: not in enabled drivers build config 00:02:34.574 common/cnxk: not in enabled drivers build config 00:02:34.574 common/mlx5: not in enabled drivers build config 00:02:34.574 common/qat: not in enabled drivers build config 00:02:34.574 common/sfc_efx: not in enabled drivers build config 00:02:34.574 mempool/bucket: not in enabled drivers build config 00:02:34.574 mempool/cnxk: not in enabled drivers build config 00:02:34.574 mempool/dpaa: not in enabled drivers build config 00:02:34.574 mempool/dpaa2: not in enabled drivers build config 00:02:34.574 mempool/octeontx: not in enabled drivers build config 00:02:34.574 mempool/stack: not in enabled drivers build config 00:02:34.574 dma/cnxk: not in enabled drivers build config 00:02:34.574 dma/dpaa: not in enabled drivers build config 00:02:34.574 dma/dpaa2: not in enabled drivers build config 00:02:34.574 dma/hisilicon: not in enabled drivers build config 00:02:34.574 dma/idxd: not in enabled drivers build config 00:02:34.574 dma/ioat: not in enabled drivers build config 00:02:34.574 dma/skeleton: not in enabled drivers build config 00:02:34.574 net/af_packet: not in enabled drivers build config 00:02:34.574 net/af_xdp: not in enabled drivers build config 00:02:34.574 net/ark: not in enabled drivers build config 00:02:34.574 net/atlantic: not in enabled drivers build config 00:02:34.574 net/avp: not in enabled drivers build config 00:02:34.574 net/axgbe: not in enabled drivers build config 00:02:34.574 net/bnx2x: not in enabled drivers build config 00:02:34.574 net/bnxt: not in enabled drivers build config 00:02:34.574 net/bonding: not in enabled drivers build config 00:02:34.574 net/cnxk: not in enabled drivers build config 00:02:34.574 net/cxgbe: not in 
enabled drivers build config 00:02:34.574 net/dpaa: not in enabled drivers build config 00:02:34.574 net/dpaa2: not in enabled drivers build config 00:02:34.574 net/e1000: not in enabled drivers build config 00:02:34.574 net/ena: not in enabled drivers build config 00:02:34.574 net/enetc: not in enabled drivers build config 00:02:34.574 net/enetfec: not in enabled drivers build config 00:02:34.574 net/enic: not in enabled drivers build config 00:02:34.574 net/failsafe: not in enabled drivers build config 00:02:34.574 net/fm10k: not in enabled drivers build config 00:02:34.574 net/gve: not in enabled drivers build config 00:02:34.574 net/hinic: not in enabled drivers build config 00:02:34.574 net/hns3: not in enabled drivers build config 00:02:34.574 net/iavf: not in enabled drivers build config 00:02:34.574 net/ice: not in enabled drivers build config 00:02:34.574 net/idpf: not in enabled drivers build config 00:02:34.574 net/igc: not in enabled drivers build config 00:02:34.574 net/ionic: not in enabled drivers build config 00:02:34.574 net/ipn3ke: not in enabled drivers build config 00:02:34.574 net/ixgbe: not in enabled drivers build config 00:02:34.574 net/kni: not in enabled drivers build config 00:02:34.574 net/liquidio: not in enabled drivers build config 00:02:34.574 net/mana: not in enabled drivers build config 00:02:34.574 net/memif: not in enabled drivers build config 00:02:34.574 net/mlx4: not in enabled drivers build config 00:02:34.574 net/mlx5: not in enabled drivers build config 00:02:34.574 net/mvneta: not in enabled drivers build config 00:02:34.574 net/mvpp2: not in enabled drivers build config 00:02:34.574 net/netvsc: not in enabled drivers build config 00:02:34.574 net/nfb: not in enabled drivers build config 00:02:34.574 net/nfp: not in enabled drivers build config 00:02:34.574 net/ngbe: not in enabled drivers build config 00:02:34.574 net/null: not in enabled drivers build config 00:02:34.574 net/octeontx: not in enabled drivers build config 00:02:34.574 net/octeon_ep: not in enabled drivers build config 00:02:34.574 net/pcap: not in enabled drivers build config 00:02:34.574 net/pfe: not in enabled drivers build config 00:02:34.574 net/qede: not in enabled drivers build config 00:02:34.574 net/ring: not in enabled drivers build config 00:02:34.574 net/sfc: not in enabled drivers build config 00:02:34.574 net/softnic: not in enabled drivers build config 00:02:34.574 net/tap: not in enabled drivers build config 00:02:34.575 net/thunderx: not in enabled drivers build config 00:02:34.575 net/txgbe: not in enabled drivers build config 00:02:34.575 net/vdev_netvsc: not in enabled drivers build config 00:02:34.575 net/vhost: not in enabled drivers build config 00:02:34.575 net/virtio: not in enabled drivers build config 00:02:34.575 net/vmxnet3: not in enabled drivers build config 00:02:34.575 raw/cnxk_bphy: not in enabled drivers build config 00:02:34.575 raw/cnxk_gpio: not in enabled drivers build config 00:02:34.575 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:34.575 raw/ifpga: not in enabled drivers build config 00:02:34.575 raw/ntb: not in enabled drivers build config 00:02:34.575 raw/skeleton: not in enabled drivers build config 00:02:34.575 crypto/armv8: not in enabled drivers build config 00:02:34.575 crypto/bcmfs: not in enabled drivers build config 00:02:34.575 crypto/caam_jr: not in enabled drivers build config 00:02:34.575 crypto/ccp: not in enabled drivers build config 00:02:34.575 crypto/cnxk: not in enabled drivers build config 00:02:34.575 
crypto/dpaa_sec: not in enabled drivers build config 00:02:34.575 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.575 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.575 crypto/mlx5: not in enabled drivers build config 00:02:34.575 crypto/mvsam: not in enabled drivers build config 00:02:34.575 crypto/nitrox: not in enabled drivers build config 00:02:34.575 crypto/null: not in enabled drivers build config 00:02:34.575 crypto/octeontx: not in enabled drivers build config 00:02:34.575 crypto/openssl: not in enabled drivers build config 00:02:34.575 crypto/scheduler: not in enabled drivers build config 00:02:34.575 crypto/uadk: not in enabled drivers build config 00:02:34.575 crypto/virtio: not in enabled drivers build config 00:02:34.575 compress/isal: not in enabled drivers build config 00:02:34.575 compress/mlx5: not in enabled drivers build config 00:02:34.575 compress/octeontx: not in enabled drivers build config 00:02:34.575 compress/zlib: not in enabled drivers build config 00:02:34.575 regex/mlx5: not in enabled drivers build config 00:02:34.575 regex/cn9k: not in enabled drivers build config 00:02:34.575 vdpa/ifc: not in enabled drivers build config 00:02:34.575 vdpa/mlx5: not in enabled drivers build config 00:02:34.575 vdpa/sfc: not in enabled drivers build config 00:02:34.575 event/cnxk: not in enabled drivers build config 00:02:34.575 event/dlb2: not in enabled drivers build config 00:02:34.575 event/dpaa: not in enabled drivers build config 00:02:34.575 event/dpaa2: not in enabled drivers build config 00:02:34.575 event/dsw: not in enabled drivers build config 00:02:34.575 event/opdl: not in enabled drivers build config 00:02:34.575 event/skeleton: not in enabled drivers build config 00:02:34.575 event/sw: not in enabled drivers build config 00:02:34.575 event/octeontx: not in enabled drivers build config 00:02:34.575 baseband/acc: not in enabled drivers build config 00:02:34.575 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:34.575 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:34.575 baseband/la12xx: not in enabled drivers build config 00:02:34.575 baseband/null: not in enabled drivers build config 00:02:34.575 baseband/turbo_sw: not in enabled drivers build config 00:02:34.575 gpu/cuda: not in enabled drivers build config 00:02:34.575 00:02:34.575 00:02:34.575 Build targets in project: 314 00:02:34.575 00:02:34.575 DPDK 22.11.4 00:02:34.575 00:02:34.575 User defined options 00:02:34.575 libdir : lib 00:02:34.575 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:34.575 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:34.575 c_link_args : 00:02:34.575 enable_docs : false 00:02:34.575 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:34.575 enable_kmods : false 00:02:34.575 machine : native 00:02:34.575 tests : false 00:02:34.575 00:02:34.575 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.575 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
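For reference, the "User defined options" summary above corresponds to a meson configure step along the following lines. This is a minimal sketch reconstructed only from the options printed in the log; the exact command issued by autobuild_common.sh is not shown here, so the option spellings and the build directory name are assumptions based on the standard DPDK 22.11 meson options (the sketch uses the explicit "meson setup" form that the WARNING above recommends):

    # assumed reconstruction of the configure step, not the literal command from this run
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # the build step that follows in the log
    ninja -C build-tmp -j10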
00:02:34.575 18:19:52 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:34.575 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:34.575 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:34.575 [2/743] Generating lib/rte_telemetry_def with a custom command 00:02:34.575 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:34.575 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:34.575 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.575 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.575 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.575 [8/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.575 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.835 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:34.835 [11/743] Linking static target lib/librte_kvargs.a 00:02:34.835 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.835 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.835 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.835 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:34.835 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:34.835 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:34.835 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:34.835 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:34.835 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.835 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:35.094 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.094 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.094 [24/743] Linking target lib/librte_kvargs.so.23.0 00:02:35.094 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.094 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.094 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:35.094 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.094 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.094 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:35.094 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.094 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.094 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.094 [34/743] Linking static target lib/librte_telemetry.a 00:02:35.352 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:35.352 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.352 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:35.352 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:35.352 [39/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:35.352 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:35.352 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.352 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.611 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.611 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:35.611 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.611 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.611 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.611 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.611 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.611 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:35.611 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:35.611 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.870 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.870 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.870 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.870 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.870 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.870 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.870 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.870 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.870 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.870 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.870 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:35.870 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.870 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:35.870 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:35.870 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:36.129 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:36.129 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.129 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.129 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:36.129 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:36.129 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:36.129 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:36.129 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.129 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.129 [77/743] Generating lib/rte_eal_def with a custom command 00:02:36.129 [78/743] Generating lib/rte_eal_mingw with a 
custom command 00:02:36.129 [79/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.129 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:36.129 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:36.129 [82/743] Generating lib/rte_ring_def with a custom command 00:02:36.129 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:36.129 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:36.129 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:36.129 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:36.388 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:36.388 [88/743] Linking static target lib/librte_ring.a 00:02:36.388 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:36.388 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:36.388 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:36.388 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:36.388 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:36.646 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.646 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:36.646 [96/743] Linking static target lib/librte_eal.a 00:02:36.904 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:36.904 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:36.904 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:36.904 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:36.904 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:36.904 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:36.904 [103/743] Linking static target lib/librte_rcu.a 00:02:37.163 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.163 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.421 [106/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.421 [107/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:37.421 [108/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.421 [109/743] Generating lib/rte_net_def with a custom command 00:02:37.421 [110/743] Linking static target lib/librte_mempool.a 00:02:37.421 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:37.421 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:37.421 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:37.421 [114/743] Generating lib/rte_meter_def with a custom command 00:02:37.421 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:37.421 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:37.421 [117/743] Linking static target lib/librte_meter.a 00:02:37.421 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:37.679 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:37.679 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:37.679 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.937 [122/743] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:37.937 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:37.937 [124/743] Linking static target lib/librte_mbuf.a 00:02:37.937 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:37.937 [126/743] Linking static target lib/librte_net.a 00:02:38.196 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.196 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.196 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.196 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.454 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:38.454 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.454 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.454 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.712 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.971 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.971 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:38.971 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.971 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:38.971 [140/743] Generating lib/rte_pci_def with a custom command 00:02:39.229 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:39.229 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:39.229 [143/743] Linking static target lib/librte_pci.a 00:02:39.229 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.229 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.229 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.229 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.229 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.229 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.229 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.488 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.488 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.488 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.488 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.488 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.488 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.488 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:39.488 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:39.488 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.488 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:39.488 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:02:39.747 [162/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.747 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:39.747 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.747 [165/743] Generating lib/rte_hash_def with a custom command 00:02:39.747 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:39.747 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:39.747 [168/743] Generating lib/rte_timer_def with a custom command 00:02:39.747 [169/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:39.747 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:39.747 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.747 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.747 [173/743] Linking static target lib/librte_cmdline.a 00:02:40.005 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:40.005 [175/743] Linking static target lib/librte_metrics.a 00:02:40.264 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.264 [177/743] Linking static target lib/librte_timer.a 00:02:40.523 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.523 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.523 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.523 [181/743] Linking static target lib/librte_ethdev.a 00:02:40.781 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.781 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:40.781 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.348 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:41.348 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:41.348 [187/743] Generating lib/rte_acl_def with a custom command 00:02:41.348 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:41.348 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:41.348 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:41.349 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:41.349 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:41.349 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:41.607 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:41.916 [195/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:42.175 [196/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:42.175 [197/743] Linking static target lib/librte_bitratestats.a 00:02:42.175 [198/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:42.175 [199/743] Linking static target lib/librte_bbdev.a 00:02:42.175 [200/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.433 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:42.433 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.691 [203/743] Linking static target lib/librte_hash.a 00:02:42.691 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:42.691 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.691 [206/743] 
Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:42.691 [207/743] Linking static target lib/acl/libavx512_tmp.a 00:02:42.952 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:42.952 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:43.210 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.210 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:43.210 [212/743] Generating lib/rte_bpf_def with a custom command 00:02:43.210 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:43.210 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:43.468 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:43.468 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:43.468 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:43.468 [218/743] Linking static target lib/librte_acl.a 00:02:43.468 [219/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:43.726 [220/743] Linking static target lib/librte_cfgfile.a 00:02:43.726 [221/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:43.726 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:43.726 [223/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.726 [224/743] Generating lib/rte_compressdev_def with a custom command 00:02:43.726 [225/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:43.726 [226/743] Linking target lib/librte_eal.so.23.0 00:02:43.726 [227/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.984 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.984 [229/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:43.984 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:43.984 [231/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:43.984 [232/743] Linking target lib/librte_ring.so.23.0 00:02:43.984 [233/743] Linking target lib/librte_meter.so.23.0 00:02:43.984 [234/743] Linking target lib/librte_pci.so.23.0 00:02:43.984 [235/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:43.984 [236/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:44.242 [237/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:44.242 [238/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:44.242 [239/743] Linking target lib/librte_rcu.so.23.0 00:02:44.242 [240/743] Linking target lib/librte_mempool.so.23.0 00:02:44.242 [241/743] Linking target lib/librte_timer.so.23.0 00:02:44.242 [242/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.242 [243/743] Linking static target lib/librte_bpf.a 00:02:44.242 [244/743] Linking target lib/librte_acl.so.23.0 00:02:44.242 [245/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:44.242 [246/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:44.242 [247/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:44.242 [248/743] Linking target lib/librte_cfgfile.so.23.0 00:02:44.242 [249/743] Generating symbol file 
lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:44.242 [250/743] Generating lib/rte_cryptodev_def with a custom command 00:02:44.242 [251/743] Linking target lib/librte_mbuf.so.23.0 00:02:44.242 [252/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:44.500 [253/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.500 [254/743] Generating lib/rte_distributor_def with a custom command 00:02:44.500 [255/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.500 [256/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:44.500 [257/743] Linking static target lib/librte_compressdev.a 00:02:44.500 [258/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.500 [259/743] Generating lib/rte_distributor_mingw with a custom command 00:02:44.500 [260/743] Linking target lib/librte_net.so.23.0 00:02:44.500 [261/743] Linking target lib/librte_bbdev.so.23.0 00:02:44.500 [262/743] Generating lib/rte_efd_def with a custom command 00:02:44.500 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:44.500 [264/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:44.758 [265/743] Linking target lib/librte_cmdline.so.23.0 00:02:44.758 [266/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.758 [267/743] Linking target lib/librte_hash.so.23.0 00:02:44.758 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:45.017 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:45.017 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:45.275 [271/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:45.275 [272/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.275 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.275 [274/743] Linking target lib/librte_ethdev.so.23.0 00:02:45.275 [275/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:45.275 [276/743] Linking static target lib/librte_distributor.a 00:02:45.275 [277/743] Linking target lib/librte_compressdev.so.23.0 00:02:45.275 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:45.534 [279/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:45.534 [280/743] Linking target lib/librte_metrics.so.23.0 00:02:45.534 [281/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.534 [282/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:45.534 [283/743] Linking target lib/librte_bpf.so.23.0 00:02:45.534 [284/743] Linking target lib/librte_bitratestats.so.23.0 00:02:45.793 [285/743] Linking target lib/librte_distributor.so.23.0 00:02:45.793 [286/743] Generating lib/rte_eventdev_def with a custom command 00:02:45.793 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:45.793 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:45.793 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:45.793 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:45.793 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:46.052 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:46.052 [293/743] Linking static target lib/librte_efd.a 00:02:46.311 [294/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.311 [295/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:46.311 [296/743] Linking target lib/librte_efd.so.23.0 00:02:46.311 [297/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.311 [298/743] Linking static target lib/librte_cryptodev.a 00:02:46.569 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:46.569 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:46.569 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:46.569 [302/743] Generating lib/rte_gro_def with a custom command 00:02:46.569 [303/743] Linking static target lib/librte_gpudev.a 00:02:46.569 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:46.569 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:46.828 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:46.828 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:47.086 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:47.086 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:47.345 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:47.345 [311/743] Generating lib/rte_gso_def with a custom command 00:02:47.345 [312/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:47.345 [313/743] Generating lib/rte_gso_mingw with a custom command 00:02:47.345 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:47.345 [315/743] Linking static target lib/librte_gro.a 00:02:47.345 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.345 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:47.603 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:47.603 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:47.603 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.603 [321/743] Linking target lib/librte_gro.so.23.0 00:02:47.603 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:47.603 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:47.603 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:47.862 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:47.862 [326/743] Linking static target lib/librte_eventdev.a 00:02:47.862 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:47.862 [328/743] Linking static target lib/librte_gso.a 00:02:47.862 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:47.862 [330/743] Linking static target lib/librte_jobstats.a 00:02:47.862 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:47.862 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:47.862 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.121 [334/743] Linking target 
lib/librte_gso.so.23.0 00:02:48.121 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:48.121 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:48.121 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:48.121 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:48.121 [339/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.121 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:48.379 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:48.379 [342/743] Linking target lib/librte_jobstats.so.23.0 00:02:48.379 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:48.379 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:48.379 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:48.379 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:48.379 [347/743] Linking static target lib/librte_ip_frag.a 00:02:48.636 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.636 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:48.636 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:48.893 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.893 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:48.893 [353/743] Linking static target lib/librte_latencystats.a 00:02:48.893 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:02:48.893 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:48.893 [356/743] Generating lib/rte_member_def with a custom command 00:02:48.893 [357/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:48.893 [358/743] Generating lib/rte_member_mingw with a custom command 00:02:48.893 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:48.893 [360/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.151 [361/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:49.151 [362/743] Generating lib/rte_pcapng_def with a custom command 00:02:49.151 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:49.151 [364/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:49.151 [365/743] Linking target lib/librte_latencystats.so.23.0 00:02:49.151 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.151 [367/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:49.151 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.410 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.410 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:49.668 [371/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:49.668 [372/743] Linking static target lib/librte_lpm.a 00:02:49.668 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:49.668 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:49.668 [375/743] Generating 
lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.668 [376/743] Generating lib/rte_power_def with a custom command 00:02:49.668 [377/743] Generating lib/rte_power_mingw with a custom command 00:02:49.668 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:49.668 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.926 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:49.926 [381/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:49.926 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.926 [383/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:49.926 [384/743] Generating lib/rte_regexdev_def with a custom command 00:02:49.926 [385/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.926 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:49.926 [387/743] Generating lib/rte_dmadev_def with a custom command 00:02:49.926 [388/743] Linking target lib/librte_lpm.so.23.0 00:02:49.926 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:49.926 [390/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:49.926 [391/743] Linking static target lib/librte_pcapng.a 00:02:49.926 [392/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.926 [393/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:49.926 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:49.926 [395/743] Linking static target lib/librte_rawdev.a 00:02:50.185 [396/743] Generating lib/rte_rib_def with a custom command 00:02:50.185 [397/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:50.185 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:50.185 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:50.185 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:50.185 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.185 [402/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.185 [403/743] Linking static target lib/librte_dmadev.a 00:02:50.185 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:50.443 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:50.443 [406/743] Linking static target lib/librte_power.a 00:02:50.443 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:50.443 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.443 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:50.443 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:50.702 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:50.702 [412/743] Linking static target lib/librte_regexdev.a 00:02:50.702 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:50.702 [414/743] Generating lib/rte_sched_def with a custom command 00:02:50.702 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:50.702 [416/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:50.702 [417/743] Generating lib/rte_sched_mingw with a custom command 00:02:50.702 [418/743] Linking 
static target lib/librte_member.a 00:02:50.702 [419/743] Generating lib/rte_security_def with a custom command 00:02:50.702 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:50.702 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:50.702 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.960 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:50.960 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:50.960 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:50.960 [426/743] Generating lib/rte_stack_def with a custom command 00:02:50.960 [427/743] Generating lib/rte_stack_mingw with a custom command 00:02:50.960 [428/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:50.960 [429/743] Linking static target lib/librte_stack.a 00:02:50.960 [430/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.960 [431/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:50.960 [432/743] Linking static target lib/librte_reorder.a 00:02:50.960 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.960 [434/743] Linking target lib/librte_member.so.23.0 00:02:50.960 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.219 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.219 [437/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:51.219 [438/743] Linking static target lib/librte_rib.a 00:02:51.219 [439/743] Linking target lib/librte_stack.so.23.0 00:02:51.219 [440/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.219 [441/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.219 [442/743] Linking target lib/librte_reorder.so.23.0 00:02:51.219 [443/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.219 [444/743] Linking target lib/librte_power.so.23.0 00:02:51.219 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:51.477 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.477 [447/743] Linking static target lib/librte_security.a 00:02:51.477 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.735 [449/743] Linking target lib/librte_rib.so.23.0 00:02:51.735 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:51.735 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:51.735 [452/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.735 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:51.735 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.994 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.994 [456/743] Linking target lib/librte_security.so.23.0 00:02:51.994 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.994 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:52.252 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:52.252 [460/743] Linking static target lib/librte_sched.a 00:02:52.511 [461/743] 
Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.511 [462/743] Linking target lib/librte_sched.so.23.0 00:02:52.511 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:52.511 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:52.769 [465/743] Generating lib/rte_ipsec_def with a custom command 00:02:52.769 [466/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.769 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:52.769 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:52.769 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.769 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:52.769 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:53.335 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:53.335 [473/743] Generating lib/rte_fib_def with a custom command 00:02:53.335 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:53.335 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:53.335 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:53.335 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:53.335 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:53.335 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:53.593 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:53.594 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:53.594 [482/743] Linking static target lib/librte_ipsec.a 00:02:54.160 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.160 [484/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:54.160 [485/743] Linking target lib/librte_ipsec.so.23.0 00:02:54.160 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:54.160 [487/743] Linking static target lib/librte_fib.a 00:02:54.427 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:54.427 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:54.427 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:54.427 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:54.427 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.427 [493/743] Linking target lib/librte_fib.so.23.0 00:02:54.784 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:55.042 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:55.042 [496/743] Generating lib/rte_port_def with a custom command 00:02:55.042 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:55.300 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:55.300 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:55.300 [500/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:55.300 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:02:55.300 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:55.558 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:55.558 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:55.558 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:55.558 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:55.558 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:55.558 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:55.816 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:55.816 [510/743] Linking static target lib/librte_port.a 00:02:56.075 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:56.333 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:56.333 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:56.333 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.333 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:56.333 [516/743] Linking target lib/librte_port.so.23.0 00:02:56.333 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:56.333 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:56.333 [519/743] Linking static target lib/librte_pdump.a 00:02:56.333 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:56.591 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.850 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:56.850 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:56.850 [524/743] Generating lib/rte_table_def with a custom command 00:02:56.850 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:57.109 [526/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:57.109 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:57.109 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:57.370 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:57.370 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:57.628 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:57.628 [532/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:57.628 [533/743] Generating lib/rte_pipeline_def with a custom command 00:02:57.628 [534/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:57.628 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:57.628 [536/743] Linking static target lib/librte_table.a 00:02:57.886 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:57.886 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:58.145 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.404 [540/743] Linking target lib/librte_table.so.23.0 00:02:58.404 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:58.404 [542/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:58.404 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:58.404 [544/743] Generating lib/rte_graph_def with a custom command 00:02:58.404 [545/743] Generating symbol file 
lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:58.404 [546/743] Generating lib/rte_graph_mingw with a custom command 00:02:58.404 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:58.663 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:58.922 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:58.922 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:58.922 [551/743] Linking static target lib/librte_graph.a 00:02:59.181 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:59.181 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:59.181 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:59.181 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:59.749 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:59.749 [557/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:59.749 [558/743] Generating lib/rte_node_def with a custom command 00:02:59.749 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:59.749 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:59.749 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.749 [562/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:59.749 [563/743] Linking target lib/librte_graph.so.23.0 00:02:59.749 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.008 [565/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:00.008 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:00.008 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:00.008 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:00.008 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.008 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.008 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:00.008 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:00.008 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:00.008 [574/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:00.008 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:00.008 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:00.267 [577/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:00.267 [578/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:00.267 [579/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.525 [580/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:00.525 [581/743] Linking static target lib/librte_node.a 00:03:00.525 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.525 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.525 [584/743] Linking static target drivers/librte_bus_vdev.a 00:03:00.525 [585/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.525 [586/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:00.784 
[587/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:00.784 [588/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.784 [589/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.784 [590/743] Linking static target drivers/librte_bus_pci.a 00:03:00.784 [591/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.784 [592/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.784 [593/743] Linking target lib/librte_node.so.23.0 00:03:00.784 [594/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.784 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:00.784 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:01.043 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:01.043 [598/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.043 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:01.043 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:01.302 [601/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:01.302 [602/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.302 [603/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:01.302 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:01.561 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.561 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.561 [607/743] Linking static target drivers/librte_mempool_ring.a 00:03:01.561 [608/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:01.561 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.561 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:02.128 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:02.386 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:02.386 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:02.386 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:02.645 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:03.212 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:03.212 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:03.212 [618/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:03.212 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:03.779 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:03.780 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:03.780 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:03.780 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:03.780 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:04.038 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:04.605 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:05.172 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:05.172 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:05.172 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:05.172 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:05.172 [631/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:05.172 [632/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:05.172 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:05.172 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:05.431 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:05.689 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:05.948 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:05.948 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:05.948 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:05.948 [640/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.207 [641/743] Linking static target lib/librte_vhost.a 00:03:06.207 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:06.207 [643/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:06.207 [644/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.466 [645/743] Linking static target drivers/librte_net_i40e.a 00:03:06.466 [646/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:06.466 [647/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.725 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:06.725 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:06.983 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:06.983 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:06.983 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.983 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:06.983 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:07.241 [655/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.241 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:07.241 [657/743] Linking target lib/librte_vhost.so.23.0 00:03:07.241 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:07.500 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:07.758 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 
00:03:08.017 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:08.017 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:08.017 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:08.017 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:08.017 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:08.274 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:08.274 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:08.274 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:08.274 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:08.532 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:08.791 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:08.791 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:09.050 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:09.309 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:09.566 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:09.566 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:09.824 [677/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:09.824 [678/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:09.824 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:10.083 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:10.083 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:10.083 [682/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:10.651 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:10.651 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:10.651 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:10.651 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:10.651 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:10.651 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:10.910 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:10.910 [690/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:11.170 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:11.170 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:11.170 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:11.170 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:11.429 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:11.429 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:12.062 
[697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:12.062 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:12.062 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:12.321 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:12.321 [701/743] Linking static target lib/librte_pipeline.a 00:03:12.321 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:12.321 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:12.580 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:12.840 [705/743] Linking target app/dpdk-dumpcap 00:03:12.840 [706/743] Linking target app/dpdk-pdump 00:03:12.840 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:12.840 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:13.100 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:13.100 [710/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:13.100 [711/743] Linking target app/dpdk-proc-info 00:03:13.100 [712/743] Linking target app/dpdk-test-acl 00:03:13.100 [713/743] Linking target app/dpdk-test-bbdev 00:03:13.359 [714/743] Linking target app/dpdk-test-cmdline 00:03:13.359 [715/743] Linking target app/dpdk-test-compress-perf 00:03:13.359 [716/743] Linking target app/dpdk-test-crypto-perf 00:03:13.618 [717/743] Linking target app/dpdk-test-fib 00:03:13.618 [718/743] Linking target app/dpdk-test-eventdev 00:03:13.618 [719/743] Linking target app/dpdk-test-flow-perf 00:03:13.618 [720/743] Linking target app/dpdk-test-gpudev 00:03:13.618 [721/743] Linking target app/dpdk-test-pipeline 00:03:14.184 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:14.184 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:14.184 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:14.442 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:14.442 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:14.442 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:14.700 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.700 [729/743] Linking target lib/librte_pipeline.so.23.0 00:03:14.959 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:15.218 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:15.218 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:15.218 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:15.218 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:15.477 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:15.477 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:15.477 [737/743] Linking target app/dpdk-test-sad 00:03:15.736 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:15.736 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:15.736 [740/743] Linking target app/dpdk-test-regex 00:03:16.008 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:16.008 [742/743] Linking target app/dpdk-testpmd 00:03:16.268 [743/743] Linking target 
app/dpdk-test-security-perf
00:03:16.268 18:20:34 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:16.268 18:20:34 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:16.268 18:20:34 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:16.527 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:16.527 [0/1] Installing files.
00:03:16.789 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:16.789 Installing
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.789 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.789 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.790 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.791 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:16.792 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:16.792 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.793 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.793 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.793 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.793 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.793 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.052 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.053 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:17.053 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.053 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.053 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.053 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.053 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
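(Editor's aside, not part of the captured log: at this point the install step has copied all librte_* runtime libraries, the PMD driver plugins under lib/dpdk/pmds-23.0, and the dpdk-* command-line apps into the build prefix. These pieces are consumed by applications through DPDK's Environment Abstraction Layer. Below is a minimal, illustrative sketch of such a consumer; it assumes only the public rte_eal_init/rte_eal_cleanup/rte_lcore_count API from the headers installed in this step, and the program name and any EAL arguments passed on the command line are placeholders, not anything taken from this build.)

```c
/* Minimal EAL-init sketch (illustrative, not part of this CI job).
 * Initializes DPDK's Environment Abstraction Layer, which is what loads
 * the PMD plugins installed above under lib/dpdk/pmds-23.0. */
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() parses EAL arguments (e.g. core list) and returns the
     * number of arguments it consumed, or a negative value on failure. */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return EXIT_FAILURE;
    }

    printf("EAL initialized on %u lcore(s)\n", rte_lcore_count());

    /* Release hugepages and other EAL resources before exiting. */
    rte_eal_cleanup();
    return EXIT_SUCCESS;
}
```

(Such a program would normally be compiled against the headers copied into build/include and linked via the libdpdk pkg-config file that this install step also places under build/lib/pkgconfig later in the log.)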
00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.053 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.054 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.314 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.315 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.316 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.316 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:17.316 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:17.316 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:17.316 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:17.316 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:17.316 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:17.316 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:17.316 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:17.316 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:17.316 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:17.316 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:17.316 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:17.316 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:17.316 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:17.316 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:17.316 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:17.316 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:17.316 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:17.316 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:17.316 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:17.316 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:17.316 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:17.316 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:17.316 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:17.316 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:17.316 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:17.316 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:17.316 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:17.316 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:17.316 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:17.316 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:17.316 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:17.316 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:17.316 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:17.316 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:17.316 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:17.316 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:17.316 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:17.316 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:17.316 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:17.316 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:17.316 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:17.316 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:17.316 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:17.316 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:17.316 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:17.316 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:17.316 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:17.316 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:17.316 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:17.316 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:17.316 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:17.317 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:17.317 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:17.317 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:17.317 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:17.317 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:17.317 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:17.317 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:17.317 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:17.317 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:17.317 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:17.317 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:17.317 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:17.317 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:17.317 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:17.317 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:17.317 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:17.317 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:17.317 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:17.317 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:17.317 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:17.317 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:17.317 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:17.317 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:17.317 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:17.317 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:17.317 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:17.317 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:17.317 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:17.317 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:17.317 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:17.317 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:17.317 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:17.317 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:17.317 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:17.317 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:17.317 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:17.317 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:17.317 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:17.317 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:17.317 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:17.317 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:17.317 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:17.317 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:17.317 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:17.317 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:17.317 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:17.317 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:17.317 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:17.317 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:17.317 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:17.317 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:17.317 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:17.317 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
00:03:17.317 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:17.317 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:17.317 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:17.317 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:17.317 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:17.317 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:17.317 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:17.317 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:17.317 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:17.317 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:17.317 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:17.317 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:17.317 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:17.317 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:17.317 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:17.317 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:17.317 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:17.317 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:17.317 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:17.317 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:17.317 18:20:35 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:17.317 18:20:35 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:17.317 00:03:17.317 real 0m49.607s 00:03:17.317 user 5m47.891s 00:03:17.317 sys 0m57.998s 00:03:17.317 18:20:35 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:17.317 18:20:35 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:17.317 ************************************ 00:03:17.317 END TEST build_native_dpdk 00:03:17.317 ************************************ 00:03:17.317 18:20:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:17.317 18:20:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:17.317 18:20:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:17.317 18:20:35 -- spdk/autobuild.sh@55 -- $ [[ -n 
'' ]] 00:03:17.317 18:20:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:17.317 18:20:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:17.317 18:20:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:17.317 18:20:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:17.575 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:17.575 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.575 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:17.575 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:17.840 Using 'verbs' RDMA provider 00:03:33.648 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:45.852 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:45.852 Creating mk/config.mk...done. 00:03:45.852 Creating mk/cc.flags.mk...done. 00:03:45.852 Type 'make' to build. 00:03:45.852 18:21:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:45.852 18:21:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:45.852 18:21:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:45.852 18:21:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.852 ************************************ 00:03:45.852 START TEST make 00:03:45.852 ************************************ 00:03:45.852 18:21:03 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:45.852 make[1]: Nothing to be done for 'all'. 00:04:42.129 CC lib/log/log.o 00:04:42.129 CC lib/ut/ut.o 00:04:42.129 CC lib/log/log_flags.o 00:04:42.129 CC lib/log/log_deprecated.o 00:04:42.129 CC lib/ut_mock/mock.o 00:04:42.129 LIB libspdk_ut.a 00:04:42.129 SO libspdk_ut.so.2.0 00:04:42.129 LIB libspdk_log.a 00:04:42.129 LIB libspdk_ut_mock.a 00:04:42.129 SYMLINK libspdk_ut.so 00:04:42.129 SO libspdk_log.so.7.0 00:04:42.129 SO libspdk_ut_mock.so.6.0 00:04:42.129 SYMLINK libspdk_ut_mock.so 00:04:42.129 SYMLINK libspdk_log.so 00:04:42.129 CXX lib/trace_parser/trace.o 00:04:42.129 CC lib/ioat/ioat.o 00:04:42.129 CC lib/dma/dma.o 00:04:42.129 CC lib/util/base64.o 00:04:42.129 CC lib/util/bit_array.o 00:04:42.129 CC lib/util/crc16.o 00:04:42.129 CC lib/util/cpuset.o 00:04:42.129 CC lib/util/crc32.o 00:04:42.129 CC lib/util/crc32c.o 00:04:42.129 CC lib/vfio_user/host/vfio_user_pci.o 00:04:42.129 CC lib/util/crc32_ieee.o 00:04:42.129 CC lib/util/crc64.o 00:04:42.129 CC lib/util/dif.o 00:04:42.129 CC lib/util/fd.o 00:04:42.129 LIB libspdk_dma.a 00:04:42.129 CC lib/vfio_user/host/vfio_user.o 00:04:42.129 CC lib/util/fd_group.o 00:04:42.129 SO libspdk_dma.so.5.0 00:04:42.129 LIB libspdk_ioat.a 00:04:42.129 CC lib/util/file.o 00:04:42.129 CC lib/util/hexlify.o 00:04:42.129 SYMLINK libspdk_dma.so 00:04:42.129 SO libspdk_ioat.so.7.0 00:04:42.129 CC lib/util/iov.o 00:04:42.129 CC lib/util/math.o 00:04:42.129 SYMLINK libspdk_ioat.so 00:04:42.129 CC lib/util/net.o 00:04:42.129 CC lib/util/pipe.o 00:04:42.129 LIB libspdk_vfio_user.a 00:04:42.129 CC lib/util/strerror_tls.o 00:04:42.129 CC lib/util/string.o 00:04:42.129 SO libspdk_vfio_user.so.5.0 00:04:42.129 CC lib/util/uuid.o 00:04:42.129 CC lib/util/xor.o 00:04:42.129 CC lib/util/zipf.o 00:04:42.129 SYMLINK libspdk_vfio_user.so 00:04:42.129 CC lib/util/md5.o 
00:04:42.129 LIB libspdk_util.a 00:04:42.129 SO libspdk_util.so.10.0 00:04:42.129 LIB libspdk_trace_parser.a 00:04:42.129 SYMLINK libspdk_util.so 00:04:42.129 SO libspdk_trace_parser.so.6.0 00:04:42.129 SYMLINK libspdk_trace_parser.so 00:04:42.129 CC lib/conf/conf.o 00:04:42.129 CC lib/json/json_parse.o 00:04:42.129 CC lib/rdma_provider/common.o 00:04:42.129 CC lib/env_dpdk/env.o 00:04:42.129 CC lib/json/json_util.o 00:04:42.129 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:42.129 CC lib/json/json_write.o 00:04:42.129 CC lib/idxd/idxd.o 00:04:42.129 CC lib/vmd/vmd.o 00:04:42.129 CC lib/rdma_utils/rdma_utils.o 00:04:42.129 CC lib/env_dpdk/memory.o 00:04:42.129 LIB libspdk_rdma_provider.a 00:04:42.129 LIB libspdk_conf.a 00:04:42.129 SO libspdk_rdma_provider.so.6.0 00:04:42.129 CC lib/vmd/led.o 00:04:42.129 SO libspdk_conf.so.6.0 00:04:42.129 CC lib/env_dpdk/pci.o 00:04:42.129 LIB libspdk_rdma_utils.a 00:04:42.129 SYMLINK libspdk_rdma_provider.so 00:04:42.129 CC lib/idxd/idxd_user.o 00:04:42.129 LIB libspdk_json.a 00:04:42.129 SYMLINK libspdk_conf.so 00:04:42.129 CC lib/env_dpdk/init.o 00:04:42.129 SO libspdk_rdma_utils.so.1.0 00:04:42.129 SO libspdk_json.so.6.0 00:04:42.129 SYMLINK libspdk_rdma_utils.so 00:04:42.129 CC lib/env_dpdk/threads.o 00:04:42.129 SYMLINK libspdk_json.so 00:04:42.129 CC lib/env_dpdk/pci_ioat.o 00:04:42.129 CC lib/env_dpdk/pci_virtio.o 00:04:42.129 CC lib/env_dpdk/pci_vmd.o 00:04:42.129 CC lib/env_dpdk/pci_idxd.o 00:04:42.129 CC lib/env_dpdk/pci_event.o 00:04:42.129 CC lib/idxd/idxd_kernel.o 00:04:42.129 CC lib/env_dpdk/sigbus_handler.o 00:04:42.129 CC lib/env_dpdk/pci_dpdk.o 00:04:42.129 LIB libspdk_vmd.a 00:04:42.129 SO libspdk_vmd.so.6.0 00:04:42.129 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:42.129 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:42.129 CC lib/jsonrpc/jsonrpc_server.o 00:04:42.129 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:42.129 CC lib/jsonrpc/jsonrpc_client.o 00:04:42.129 LIB libspdk_idxd.a 00:04:42.129 SYMLINK libspdk_vmd.so 00:04:42.129 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:42.129 SO libspdk_idxd.so.12.1 00:04:42.129 SYMLINK libspdk_idxd.so 00:04:42.129 LIB libspdk_jsonrpc.a 00:04:42.129 SO libspdk_jsonrpc.so.6.0 00:04:42.129 SYMLINK libspdk_jsonrpc.so 00:04:42.129 CC lib/rpc/rpc.o 00:04:42.129 LIB libspdk_env_dpdk.a 00:04:42.129 SO libspdk_env_dpdk.so.15.0 00:04:42.129 LIB libspdk_rpc.a 00:04:42.129 SO libspdk_rpc.so.6.0 00:04:42.129 SYMLINK libspdk_env_dpdk.so 00:04:42.129 SYMLINK libspdk_rpc.so 00:04:42.129 CC lib/notify/notify.o 00:04:42.129 CC lib/notify/notify_rpc.o 00:04:42.129 CC lib/keyring/keyring_rpc.o 00:04:42.129 CC lib/keyring/keyring.o 00:04:42.129 CC lib/trace/trace.o 00:04:42.129 CC lib/trace/trace_flags.o 00:04:42.129 CC lib/trace/trace_rpc.o 00:04:42.129 LIB libspdk_notify.a 00:04:42.129 SO libspdk_notify.so.6.0 00:04:42.129 LIB libspdk_trace.a 00:04:42.130 LIB libspdk_keyring.a 00:04:42.130 SYMLINK libspdk_notify.so 00:04:42.130 SO libspdk_trace.so.11.0 00:04:42.130 SO libspdk_keyring.so.2.0 00:04:42.130 SYMLINK libspdk_trace.so 00:04:42.130 SYMLINK libspdk_keyring.so 00:04:42.130 CC lib/sock/sock.o 00:04:42.130 CC lib/thread/thread.o 00:04:42.130 CC lib/thread/iobuf.o 00:04:42.130 CC lib/sock/sock_rpc.o 00:04:42.130 LIB libspdk_sock.a 00:04:42.130 SO libspdk_sock.so.10.0 00:04:42.130 SYMLINK libspdk_sock.so 00:04:42.130 CC lib/nvme/nvme_ctrlr.o 00:04:42.130 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.130 CC lib/nvme/nvme_fabric.o 00:04:42.130 CC lib/nvme/nvme_ns_cmd.o 00:04:42.130 CC lib/nvme/nvme_ns.o 00:04:42.130 CC 
lib/nvme/nvme_pcie_common.o 00:04:42.130 CC lib/nvme/nvme.o 00:04:42.130 CC lib/nvme/nvme_pcie.o 00:04:42.130 CC lib/nvme/nvme_qpair.o 00:04:42.130 LIB libspdk_thread.a 00:04:42.130 SO libspdk_thread.so.10.1 00:04:42.130 CC lib/nvme/nvme_quirks.o 00:04:42.130 SYMLINK libspdk_thread.so 00:04:42.130 CC lib/nvme/nvme_transport.o 00:04:42.130 CC lib/nvme/nvme_discovery.o 00:04:42.130 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:42.130 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:42.130 CC lib/nvme/nvme_tcp.o 00:04:42.130 CC lib/nvme/nvme_opal.o 00:04:42.130 CC lib/nvme/nvme_io_msg.o 00:04:42.130 CC lib/nvme/nvme_poll_group.o 00:04:42.130 CC lib/nvme/nvme_zns.o 00:04:42.130 CC lib/nvme/nvme_auth.o 00:04:42.130 CC lib/nvme/nvme_stubs.o 00:04:42.130 CC lib/nvme/nvme_cuse.o 00:04:42.130 CC lib/nvme/nvme_rdma.o 00:04:42.389 CC lib/accel/accel.o 00:04:42.389 CC lib/blob/blobstore.o 00:04:42.389 CC lib/init/json_config.o 00:04:42.647 CC lib/init/subsystem.o 00:04:42.647 CC lib/init/subsystem_rpc.o 00:04:42.647 CC lib/init/rpc.o 00:04:42.647 CC lib/blob/request.o 00:04:42.647 CC lib/blob/zeroes.o 00:04:42.647 CC lib/blob/blob_bs_dev.o 00:04:42.907 LIB libspdk_init.a 00:04:42.907 SO libspdk_init.so.6.0 00:04:42.907 CC lib/accel/accel_rpc.o 00:04:42.907 SYMLINK libspdk_init.so 00:04:42.907 CC lib/accel/accel_sw.o 00:04:43.165 CC lib/event/reactor.o 00:04:43.165 CC lib/event/app.o 00:04:43.165 CC lib/event/log_rpc.o 00:04:43.165 CC lib/virtio/virtio.o 00:04:43.165 CC lib/fsdev/fsdev.o 00:04:43.165 CC lib/fsdev/fsdev_io.o 00:04:43.165 CC lib/event/app_rpc.o 00:04:43.165 CC lib/fsdev/fsdev_rpc.o 00:04:43.424 LIB libspdk_nvme.a 00:04:43.424 CC lib/virtio/virtio_vhost_user.o 00:04:43.424 CC lib/virtio/virtio_vfio_user.o 00:04:43.424 LIB libspdk_accel.a 00:04:43.424 SO libspdk_accel.so.16.0 00:04:43.424 CC lib/event/scheduler_static.o 00:04:43.424 CC lib/virtio/virtio_pci.o 00:04:43.424 SO libspdk_nvme.so.14.0 00:04:43.424 SYMLINK libspdk_accel.so 00:04:43.682 LIB libspdk_event.a 00:04:43.682 SO libspdk_event.so.14.0 00:04:43.682 CC lib/bdev/bdev.o 00:04:43.682 CC lib/bdev/bdev_zone.o 00:04:43.682 CC lib/bdev/bdev_rpc.o 00:04:43.682 CC lib/bdev/part.o 00:04:43.682 CC lib/bdev/scsi_nvme.o 00:04:43.682 LIB libspdk_fsdev.a 00:04:43.682 SYMLINK libspdk_nvme.so 00:04:43.682 SYMLINK libspdk_event.so 00:04:43.682 SO libspdk_fsdev.so.1.0 00:04:43.940 LIB libspdk_virtio.a 00:04:43.940 SYMLINK libspdk_fsdev.so 00:04:43.940 SO libspdk_virtio.so.7.0 00:04:43.940 SYMLINK libspdk_virtio.so 00:04:43.940 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.506 LIB libspdk_fuse_dispatcher.a 00:04:44.765 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.765 SYMLINK libspdk_fuse_dispatcher.so 00:04:45.352 LIB libspdk_blob.a 00:04:45.352 SO libspdk_blob.so.11.0 00:04:45.352 SYMLINK libspdk_blob.so 00:04:45.631 CC lib/blobfs/tree.o 00:04:45.631 CC lib/blobfs/blobfs.o 00:04:45.631 CC lib/lvol/lvol.o 00:04:46.248 LIB libspdk_bdev.a 00:04:46.248 SO libspdk_bdev.so.16.0 00:04:46.248 SYMLINK libspdk_bdev.so 00:04:46.507 LIB libspdk_lvol.a 00:04:46.507 LIB libspdk_blobfs.a 00:04:46.507 SO libspdk_lvol.so.10.0 00:04:46.507 SO libspdk_blobfs.so.10.0 00:04:46.507 CC lib/ublk/ublk.o 00:04:46.507 CC lib/ublk/ublk_rpc.o 00:04:46.507 SYMLINK libspdk_blobfs.so 00:04:46.507 SYMLINK libspdk_lvol.so 00:04:46.507 CC lib/nvmf/ctrlr.o 00:04:46.507 CC lib/nvmf/ctrlr_discovery.o 00:04:46.507 CC lib/nvmf/ctrlr_bdev.o 00:04:46.507 CC lib/nvmf/subsystem.o 00:04:46.507 CC lib/nvmf/nvmf.o 00:04:46.507 CC lib/nbd/nbd.o 00:04:46.507 CC lib/scsi/dev.o 00:04:46.507 CC 
lib/ftl/ftl_core.o 00:04:46.766 CC lib/nvmf/nvmf_rpc.o 00:04:46.766 CC lib/scsi/lun.o 00:04:47.025 CC lib/ftl/ftl_init.o 00:04:47.025 CC lib/nbd/nbd_rpc.o 00:04:47.025 CC lib/nvmf/transport.o 00:04:47.025 CC lib/scsi/port.o 00:04:47.284 LIB libspdk_ublk.a 00:04:47.284 LIB libspdk_nbd.a 00:04:47.284 CC lib/ftl/ftl_layout.o 00:04:47.284 SO libspdk_ublk.so.3.0 00:04:47.284 SO libspdk_nbd.so.7.0 00:04:47.284 CC lib/scsi/scsi.o 00:04:47.284 SYMLINK libspdk_nbd.so 00:04:47.284 CC lib/scsi/scsi_bdev.o 00:04:47.284 SYMLINK libspdk_ublk.so 00:04:47.284 CC lib/scsi/scsi_pr.o 00:04:47.284 CC lib/ftl/ftl_debug.o 00:04:47.284 CC lib/scsi/scsi_rpc.o 00:04:47.284 CC lib/nvmf/tcp.o 00:04:47.543 CC lib/nvmf/stubs.o 00:04:47.543 CC lib/scsi/task.o 00:04:47.543 CC lib/ftl/ftl_io.o 00:04:47.543 CC lib/ftl/ftl_sb.o 00:04:47.543 CC lib/ftl/ftl_l2p.o 00:04:47.800 CC lib/nvmf/mdns_server.o 00:04:47.800 CC lib/nvmf/rdma.o 00:04:47.800 LIB libspdk_scsi.a 00:04:47.800 CC lib/ftl/ftl_l2p_flat.o 00:04:47.801 CC lib/nvmf/auth.o 00:04:47.801 CC lib/ftl/ftl_nv_cache.o 00:04:47.801 SO libspdk_scsi.so.9.0 00:04:47.801 CC lib/ftl/ftl_band.o 00:04:47.801 SYMLINK libspdk_scsi.so 00:04:47.801 CC lib/ftl/ftl_band_ops.o 00:04:48.058 CC lib/iscsi/conn.o 00:04:48.058 CC lib/vhost/vhost.o 00:04:48.058 CC lib/vhost/vhost_rpc.o 00:04:48.316 CC lib/vhost/vhost_scsi.o 00:04:48.316 CC lib/vhost/vhost_blk.o 00:04:48.316 CC lib/vhost/rte_vhost_user.o 00:04:48.576 CC lib/iscsi/init_grp.o 00:04:48.834 CC lib/iscsi/iscsi.o 00:04:48.834 CC lib/ftl/ftl_writer.o 00:04:48.834 CC lib/iscsi/param.o 00:04:48.834 CC lib/ftl/ftl_rq.o 00:04:48.834 CC lib/ftl/ftl_reloc.o 00:04:49.092 CC lib/ftl/ftl_l2p_cache.o 00:04:49.092 CC lib/ftl/ftl_p2l.o 00:04:49.092 CC lib/ftl/ftl_p2l_log.o 00:04:49.092 CC lib/ftl/mngt/ftl_mngt.o 00:04:49.092 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:49.092 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:49.351 LIB libspdk_vhost.a 00:04:49.351 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:49.351 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:49.351 SO libspdk_vhost.so.8.0 00:04:49.351 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:49.351 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:49.610 SYMLINK libspdk_vhost.so 00:04:49.610 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:49.610 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:49.610 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:49.610 CC lib/iscsi/portal_grp.o 00:04:49.610 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:49.610 CC lib/iscsi/tgt_node.o 00:04:49.610 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:49.610 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:49.610 CC lib/ftl/utils/ftl_conf.o 00:04:49.610 CC lib/iscsi/iscsi_subsystem.o 00:04:49.610 LIB libspdk_nvmf.a 00:04:49.869 CC lib/iscsi/iscsi_rpc.o 00:04:49.869 CC lib/iscsi/task.o 00:04:49.869 CC lib/ftl/utils/ftl_md.o 00:04:49.869 SO libspdk_nvmf.so.19.0 00:04:49.869 CC lib/ftl/utils/ftl_mempool.o 00:04:49.869 CC lib/ftl/utils/ftl_bitmap.o 00:04:50.128 SYMLINK libspdk_nvmf.so 00:04:50.128 CC lib/ftl/utils/ftl_property.o 00:04:50.128 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:50.128 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:50.128 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:50.128 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:50.128 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:50.128 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:50.128 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:50.128 LIB libspdk_iscsi.a 00:04:50.388 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:50.388 SO libspdk_iscsi.so.8.0 00:04:50.388 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:50.388 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:50.388 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:50.388 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:50.388 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:50.388 CC lib/ftl/base/ftl_base_dev.o 00:04:50.388 CC lib/ftl/base/ftl_base_bdev.o 00:04:50.388 CC lib/ftl/ftl_trace.o 00:04:50.388 SYMLINK libspdk_iscsi.so 00:04:50.647 LIB libspdk_ftl.a 00:04:50.906 SO libspdk_ftl.so.9.0 00:04:51.165 SYMLINK libspdk_ftl.so 00:04:51.424 CC module/env_dpdk/env_dpdk_rpc.o 00:04:51.424 CC module/sock/posix/posix.o 00:04:51.424 CC module/keyring/linux/keyring.o 00:04:51.424 CC module/blob/bdev/blob_bdev.o 00:04:51.424 CC module/accel/error/accel_error.o 00:04:51.424 CC module/fsdev/aio/fsdev_aio.o 00:04:51.424 CC module/sock/uring/uring.o 00:04:51.424 CC module/accel/ioat/accel_ioat.o 00:04:51.424 CC module/keyring/file/keyring.o 00:04:51.424 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:51.683 LIB libspdk_env_dpdk_rpc.a 00:04:51.683 SO libspdk_env_dpdk_rpc.so.6.0 00:04:51.683 SYMLINK libspdk_env_dpdk_rpc.so 00:04:51.683 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:51.683 CC module/keyring/linux/keyring_rpc.o 00:04:51.683 CC module/keyring/file/keyring_rpc.o 00:04:51.683 CC module/accel/error/accel_error_rpc.o 00:04:51.683 CC module/accel/ioat/accel_ioat_rpc.o 00:04:51.683 LIB libspdk_scheduler_dynamic.a 00:04:51.683 SO libspdk_scheduler_dynamic.so.4.0 00:04:51.683 LIB libspdk_blob_bdev.a 00:04:51.943 LIB libspdk_keyring_linux.a 00:04:51.943 LIB libspdk_keyring_file.a 00:04:51.943 SO libspdk_blob_bdev.so.11.0 00:04:51.943 CC module/fsdev/aio/linux_aio_mgr.o 00:04:51.943 SYMLINK libspdk_scheduler_dynamic.so 00:04:51.943 SO libspdk_keyring_linux.so.1.0 00:04:51.943 SO libspdk_keyring_file.so.2.0 00:04:51.943 LIB libspdk_accel_error.a 00:04:51.943 LIB libspdk_accel_ioat.a 00:04:51.943 SYMLINK libspdk_blob_bdev.so 00:04:51.943 SO libspdk_accel_error.so.2.0 00:04:51.943 SYMLINK libspdk_keyring_file.so 00:04:51.943 SO libspdk_accel_ioat.so.6.0 00:04:51.943 SYMLINK libspdk_keyring_linux.so 00:04:51.943 SYMLINK libspdk_accel_error.so 00:04:51.943 SYMLINK libspdk_accel_ioat.so 00:04:51.943 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:52.202 CC module/accel/dsa/accel_dsa.o 00:04:52.202 CC module/scheduler/gscheduler/gscheduler.o 00:04:52.202 CC module/accel/iaa/accel_iaa.o 00:04:52.202 LIB libspdk_scheduler_dpdk_governor.a 00:04:52.202 LIB libspdk_fsdev_aio.a 00:04:52.202 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:52.202 LIB libspdk_sock_uring.a 00:04:52.202 CC module/bdev/error/vbdev_error.o 00:04:52.202 CC module/bdev/delay/vbdev_delay.o 00:04:52.202 SO libspdk_fsdev_aio.so.1.0 00:04:52.202 SO libspdk_sock_uring.so.5.0 00:04:52.202 LIB libspdk_sock_posix.a 00:04:52.202 CC module/blobfs/bdev/blobfs_bdev.o 00:04:52.202 LIB libspdk_scheduler_gscheduler.a 00:04:52.202 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:52.202 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:52.202 SO libspdk_sock_posix.so.6.0 00:04:52.202 SYMLINK libspdk_fsdev_aio.so 00:04:52.202 SYMLINK libspdk_sock_uring.so 00:04:52.202 SO libspdk_scheduler_gscheduler.so.4.0 00:04:52.202 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:52.202 CC module/accel/iaa/accel_iaa_rpc.o 00:04:52.202 CC module/bdev/error/vbdev_error_rpc.o 00:04:52.462 SYMLINK libspdk_scheduler_gscheduler.so 00:04:52.462 SYMLINK libspdk_sock_posix.so 00:04:52.462 CC module/accel/dsa/accel_dsa_rpc.o 00:04:52.462 LIB libspdk_accel_iaa.a 00:04:52.462 LIB libspdk_blobfs_bdev.a 00:04:52.462 SO libspdk_blobfs_bdev.so.6.0 00:04:52.462 SO libspdk_accel_iaa.so.3.0 00:04:52.462 CC 
module/bdev/gpt/gpt.o 00:04:52.462 CC module/bdev/lvol/vbdev_lvol.o 00:04:52.462 CC module/bdev/gpt/vbdev_gpt.o 00:04:52.462 SYMLINK libspdk_accel_iaa.so 00:04:52.462 SYMLINK libspdk_blobfs_bdev.so 00:04:52.462 LIB libspdk_bdev_error.a 00:04:52.462 LIB libspdk_accel_dsa.a 00:04:52.721 SO libspdk_bdev_error.so.6.0 00:04:52.721 SO libspdk_accel_dsa.so.5.0 00:04:52.721 CC module/bdev/malloc/bdev_malloc.o 00:04:52.721 CC module/bdev/null/bdev_null.o 00:04:52.721 SYMLINK libspdk_accel_dsa.so 00:04:52.721 SYMLINK libspdk_bdev_error.so 00:04:52.721 LIB libspdk_bdev_delay.a 00:04:52.721 SO libspdk_bdev_delay.so.6.0 00:04:52.721 CC module/bdev/passthru/vbdev_passthru.o 00:04:52.721 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:52.721 CC module/bdev/nvme/bdev_nvme.o 00:04:52.721 SYMLINK libspdk_bdev_delay.so 00:04:52.721 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:52.721 CC module/bdev/raid/bdev_raid.o 00:04:52.979 CC module/bdev/split/vbdev_split.o 00:04:52.979 LIB libspdk_bdev_gpt.a 00:04:52.979 SO libspdk_bdev_gpt.so.6.0 00:04:52.979 CC module/bdev/null/bdev_null_rpc.o 00:04:52.979 CC module/bdev/split/vbdev_split_rpc.o 00:04:52.979 SYMLINK libspdk_bdev_gpt.so 00:04:52.979 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:52.979 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:52.979 LIB libspdk_bdev_passthru.a 00:04:52.979 SO libspdk_bdev_passthru.so.6.0 00:04:53.238 LIB libspdk_bdev_split.a 00:04:53.238 LIB libspdk_bdev_null.a 00:04:53.238 LIB libspdk_bdev_malloc.a 00:04:53.238 SYMLINK libspdk_bdev_passthru.so 00:04:53.238 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:53.238 SO libspdk_bdev_split.so.6.0 00:04:53.238 SO libspdk_bdev_null.so.6.0 00:04:53.238 SO libspdk_bdev_malloc.so.6.0 00:04:53.238 SYMLINK libspdk_bdev_split.so 00:04:53.238 SYMLINK libspdk_bdev_null.so 00:04:53.238 CC module/bdev/raid/bdev_raid_rpc.o 00:04:53.238 SYMLINK libspdk_bdev_malloc.so 00:04:53.238 CC module/bdev/raid/bdev_raid_sb.o 00:04:53.238 CC module/bdev/raid/raid0.o 00:04:53.238 CC module/bdev/uring/bdev_uring.o 00:04:53.495 CC module/bdev/aio/bdev_aio.o 00:04:53.495 LIB libspdk_bdev_lvol.a 00:04:53.495 SO libspdk_bdev_lvol.so.6.0 00:04:53.495 CC module/bdev/raid/raid1.o 00:04:53.495 CC module/bdev/raid/concat.o 00:04:53.495 SYMLINK libspdk_bdev_lvol.so 00:04:53.495 CC module/bdev/uring/bdev_uring_rpc.o 00:04:53.495 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:53.495 CC module/bdev/aio/bdev_aio_rpc.o 00:04:53.754 CC module/bdev/ftl/bdev_ftl.o 00:04:53.754 LIB libspdk_bdev_uring.a 00:04:53.754 LIB libspdk_bdev_zone_block.a 00:04:53.754 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:53.754 CC module/bdev/nvme/nvme_rpc.o 00:04:53.754 LIB libspdk_bdev_aio.a 00:04:53.754 SO libspdk_bdev_uring.so.6.0 00:04:53.754 SO libspdk_bdev_zone_block.so.6.0 00:04:53.754 SO libspdk_bdev_aio.so.6.0 00:04:53.754 LIB libspdk_bdev_raid.a 00:04:54.012 SYMLINK libspdk_bdev_uring.so 00:04:54.012 SYMLINK libspdk_bdev_zone_block.so 00:04:54.012 CC module/bdev/nvme/bdev_mdns_client.o 00:04:54.012 CC module/bdev/nvme/vbdev_opal.o 00:04:54.012 SYMLINK libspdk_bdev_aio.so 00:04:54.012 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:54.012 SO libspdk_bdev_raid.so.6.0 00:04:54.012 CC module/bdev/iscsi/bdev_iscsi.o 00:04:54.012 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:54.012 SYMLINK libspdk_bdev_raid.so 00:04:54.012 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:54.012 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:54.012 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:54.012 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:54.012 LIB 
libspdk_bdev_ftl.a 00:04:54.012 SO libspdk_bdev_ftl.so.6.0 00:04:54.270 SYMLINK libspdk_bdev_ftl.so 00:04:54.270 LIB libspdk_bdev_iscsi.a 00:04:54.270 SO libspdk_bdev_iscsi.so.6.0 00:04:54.528 SYMLINK libspdk_bdev_iscsi.so 00:04:54.528 LIB libspdk_bdev_virtio.a 00:04:54.528 SO libspdk_bdev_virtio.so.6.0 00:04:54.528 SYMLINK libspdk_bdev_virtio.so 00:04:55.093 LIB libspdk_bdev_nvme.a 00:04:55.093 SO libspdk_bdev_nvme.so.7.0 00:04:55.093 SYMLINK libspdk_bdev_nvme.so 00:04:55.657 CC module/event/subsystems/vmd/vmd.o 00:04:55.657 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:55.657 CC module/event/subsystems/fsdev/fsdev.o 00:04:55.657 CC module/event/subsystems/iobuf/iobuf.o 00:04:55.657 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:55.657 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:55.657 CC module/event/subsystems/sock/sock.o 00:04:55.657 CC module/event/subsystems/scheduler/scheduler.o 00:04:55.657 CC module/event/subsystems/keyring/keyring.o 00:04:55.657 LIB libspdk_event_vhost_blk.a 00:04:55.914 LIB libspdk_event_fsdev.a 00:04:55.914 SO libspdk_event_vhost_blk.so.3.0 00:04:55.914 LIB libspdk_event_vmd.a 00:04:55.914 LIB libspdk_event_keyring.a 00:04:55.914 SO libspdk_event_fsdev.so.1.0 00:04:55.914 LIB libspdk_event_scheduler.a 00:04:55.914 LIB libspdk_event_sock.a 00:04:55.914 LIB libspdk_event_iobuf.a 00:04:55.914 SO libspdk_event_vmd.so.6.0 00:04:55.914 SO libspdk_event_keyring.so.1.0 00:04:55.914 SO libspdk_event_scheduler.so.4.0 00:04:55.914 SO libspdk_event_sock.so.5.0 00:04:55.914 SYMLINK libspdk_event_vhost_blk.so 00:04:55.914 SYMLINK libspdk_event_fsdev.so 00:04:55.914 SO libspdk_event_iobuf.so.3.0 00:04:55.914 SYMLINK libspdk_event_keyring.so 00:04:55.914 SYMLINK libspdk_event_scheduler.so 00:04:55.914 SYMLINK libspdk_event_sock.so 00:04:55.914 SYMLINK libspdk_event_vmd.so 00:04:55.914 SYMLINK libspdk_event_iobuf.so 00:04:56.170 CC module/event/subsystems/accel/accel.o 00:04:56.428 LIB libspdk_event_accel.a 00:04:56.428 SO libspdk_event_accel.so.6.0 00:04:56.428 SYMLINK libspdk_event_accel.so 00:04:56.686 CC module/event/subsystems/bdev/bdev.o 00:04:56.943 LIB libspdk_event_bdev.a 00:04:56.943 SO libspdk_event_bdev.so.6.0 00:04:56.943 SYMLINK libspdk_event_bdev.so 00:04:57.201 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:57.201 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:57.201 CC module/event/subsystems/nbd/nbd.o 00:04:57.201 CC module/event/subsystems/scsi/scsi.o 00:04:57.201 CC module/event/subsystems/ublk/ublk.o 00:04:57.472 LIB libspdk_event_nbd.a 00:04:57.472 LIB libspdk_event_ublk.a 00:04:57.472 LIB libspdk_event_scsi.a 00:04:57.472 SO libspdk_event_nbd.so.6.0 00:04:57.472 SO libspdk_event_ublk.so.3.0 00:04:57.472 SO libspdk_event_scsi.so.6.0 00:04:57.472 SYMLINK libspdk_event_nbd.so 00:04:57.472 SYMLINK libspdk_event_ublk.so 00:04:57.472 SYMLINK libspdk_event_scsi.so 00:04:57.472 LIB libspdk_event_nvmf.a 00:04:57.730 SO libspdk_event_nvmf.so.6.0 00:04:57.730 SYMLINK libspdk_event_nvmf.so 00:04:57.730 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:57.730 CC module/event/subsystems/iscsi/iscsi.o 00:04:57.988 LIB libspdk_event_vhost_scsi.a 00:04:57.988 SO libspdk_event_vhost_scsi.so.3.0 00:04:57.988 LIB libspdk_event_iscsi.a 00:04:57.988 SO libspdk_event_iscsi.so.6.0 00:04:57.988 SYMLINK libspdk_event_vhost_scsi.so 00:04:58.247 SYMLINK libspdk_event_iscsi.so 00:04:58.247 SO libspdk.so.6.0 00:04:58.247 SYMLINK libspdk.so 00:04:58.505 CC app/trace_record/trace_record.o 00:04:58.505 CC app/spdk_nvme_perf/perf.o 00:04:58.505 CC 
app/spdk_nvme_identify/identify.o 00:04:58.505 CXX app/trace/trace.o 00:04:58.505 CC app/spdk_lspci/spdk_lspci.o 00:04:58.505 CC app/nvmf_tgt/nvmf_main.o 00:04:58.505 CC app/iscsi_tgt/iscsi_tgt.o 00:04:58.505 CC app/spdk_tgt/spdk_tgt.o 00:04:58.764 CC test/thread/poller_perf/poller_perf.o 00:04:58.764 CC examples/util/zipf/zipf.o 00:04:58.764 LINK spdk_lspci 00:04:58.764 LINK poller_perf 00:04:58.764 LINK zipf 00:04:58.764 LINK spdk_trace_record 00:04:59.023 LINK spdk_tgt 00:04:59.023 LINK nvmf_tgt 00:04:59.023 LINK iscsi_tgt 00:04:59.023 CC app/spdk_nvme_discover/discovery_aer.o 00:04:59.023 LINK spdk_trace 00:04:59.023 CC app/spdk_top/spdk_top.o 00:04:59.281 CC examples/ioat/perf/perf.o 00:04:59.281 CC app/spdk_dd/spdk_dd.o 00:04:59.281 CC test/dma/test_dma/test_dma.o 00:04:59.281 LINK spdk_nvme_discover 00:04:59.281 CC app/fio/nvme/fio_plugin.o 00:04:59.281 CC examples/vmd/lsvmd/lsvmd.o 00:04:59.281 LINK spdk_nvme_identify 00:04:59.281 CC examples/idxd/perf/perf.o 00:04:59.540 LINK spdk_nvme_perf 00:04:59.541 LINK lsvmd 00:04:59.541 LINK ioat_perf 00:04:59.541 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:59.800 CC examples/ioat/verify/verify.o 00:04:59.800 LINK spdk_dd 00:04:59.800 CC examples/vmd/led/led.o 00:04:59.800 LINK idxd_perf 00:04:59.800 LINK interrupt_tgt 00:04:59.800 CC examples/thread/thread/thread_ex.o 00:04:59.800 LINK test_dma 00:04:59.800 CC examples/sock/hello_world/hello_sock.o 00:04:59.800 LINK spdk_nvme 00:04:59.800 LINK led 00:05:00.059 LINK verify 00:05:00.059 LINK spdk_top 00:05:00.059 TEST_HEADER include/spdk/accel.h 00:05:00.059 TEST_HEADER include/spdk/accel_module.h 00:05:00.059 TEST_HEADER include/spdk/assert.h 00:05:00.059 TEST_HEADER include/spdk/barrier.h 00:05:00.059 LINK thread 00:05:00.059 TEST_HEADER include/spdk/base64.h 00:05:00.059 TEST_HEADER include/spdk/bdev.h 00:05:00.059 CC app/vhost/vhost.o 00:05:00.059 TEST_HEADER include/spdk/bdev_module.h 00:05:00.059 TEST_HEADER include/spdk/bdev_zone.h 00:05:00.059 TEST_HEADER include/spdk/bit_array.h 00:05:00.059 TEST_HEADER include/spdk/bit_pool.h 00:05:00.059 TEST_HEADER include/spdk/blob_bdev.h 00:05:00.059 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:00.059 TEST_HEADER include/spdk/blobfs.h 00:05:00.059 TEST_HEADER include/spdk/blob.h 00:05:00.059 TEST_HEADER include/spdk/conf.h 00:05:00.059 TEST_HEADER include/spdk/config.h 00:05:00.059 TEST_HEADER include/spdk/cpuset.h 00:05:00.059 TEST_HEADER include/spdk/crc16.h 00:05:00.059 TEST_HEADER include/spdk/crc32.h 00:05:00.059 TEST_HEADER include/spdk/crc64.h 00:05:00.059 LINK hello_sock 00:05:00.059 TEST_HEADER include/spdk/dif.h 00:05:00.059 TEST_HEADER include/spdk/dma.h 00:05:00.059 TEST_HEADER include/spdk/endian.h 00:05:00.059 TEST_HEADER include/spdk/env_dpdk.h 00:05:00.059 CC app/fio/bdev/fio_plugin.o 00:05:00.059 TEST_HEADER include/spdk/env.h 00:05:00.059 TEST_HEADER include/spdk/event.h 00:05:00.059 TEST_HEADER include/spdk/fd_group.h 00:05:00.059 TEST_HEADER include/spdk/fd.h 00:05:00.059 TEST_HEADER include/spdk/file.h 00:05:00.059 TEST_HEADER include/spdk/fsdev.h 00:05:00.059 TEST_HEADER include/spdk/fsdev_module.h 00:05:00.059 TEST_HEADER include/spdk/ftl.h 00:05:00.059 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:00.059 TEST_HEADER include/spdk/gpt_spec.h 00:05:00.059 TEST_HEADER include/spdk/hexlify.h 00:05:00.059 TEST_HEADER include/spdk/histogram_data.h 00:05:00.059 TEST_HEADER include/spdk/idxd.h 00:05:00.059 TEST_HEADER include/spdk/idxd_spec.h 00:05:00.059 TEST_HEADER include/spdk/init.h 00:05:00.059 TEST_HEADER 
include/spdk/ioat.h 00:05:00.059 TEST_HEADER include/spdk/ioat_spec.h 00:05:00.059 TEST_HEADER include/spdk/iscsi_spec.h 00:05:00.059 TEST_HEADER include/spdk/json.h 00:05:00.059 TEST_HEADER include/spdk/jsonrpc.h 00:05:00.059 TEST_HEADER include/spdk/keyring.h 00:05:00.059 TEST_HEADER include/spdk/keyring_module.h 00:05:00.059 TEST_HEADER include/spdk/likely.h 00:05:00.059 TEST_HEADER include/spdk/log.h 00:05:00.059 TEST_HEADER include/spdk/lvol.h 00:05:00.059 TEST_HEADER include/spdk/md5.h 00:05:00.059 TEST_HEADER include/spdk/memory.h 00:05:00.059 TEST_HEADER include/spdk/mmio.h 00:05:00.059 TEST_HEADER include/spdk/nbd.h 00:05:00.059 TEST_HEADER include/spdk/net.h 00:05:00.059 TEST_HEADER include/spdk/notify.h 00:05:00.059 TEST_HEADER include/spdk/nvme.h 00:05:00.059 TEST_HEADER include/spdk/nvme_intel.h 00:05:00.059 CC test/app/bdev_svc/bdev_svc.o 00:05:00.059 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:00.059 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:00.059 TEST_HEADER include/spdk/nvme_spec.h 00:05:00.059 TEST_HEADER include/spdk/nvme_zns.h 00:05:00.059 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:00.059 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:00.059 TEST_HEADER include/spdk/nvmf.h 00:05:00.059 TEST_HEADER include/spdk/nvmf_spec.h 00:05:00.059 TEST_HEADER include/spdk/nvmf_transport.h 00:05:00.059 TEST_HEADER include/spdk/opal.h 00:05:00.059 TEST_HEADER include/spdk/opal_spec.h 00:05:00.059 TEST_HEADER include/spdk/pci_ids.h 00:05:00.059 TEST_HEADER include/spdk/pipe.h 00:05:00.059 TEST_HEADER include/spdk/queue.h 00:05:00.059 CC test/env/vtophys/vtophys.o 00:05:00.059 TEST_HEADER include/spdk/reduce.h 00:05:00.059 TEST_HEADER include/spdk/rpc.h 00:05:00.059 TEST_HEADER include/spdk/scheduler.h 00:05:00.059 TEST_HEADER include/spdk/scsi.h 00:05:00.059 TEST_HEADER include/spdk/scsi_spec.h 00:05:00.059 TEST_HEADER include/spdk/sock.h 00:05:00.059 TEST_HEADER include/spdk/stdinc.h 00:05:00.059 TEST_HEADER include/spdk/string.h 00:05:00.319 TEST_HEADER include/spdk/thread.h 00:05:00.319 TEST_HEADER include/spdk/trace.h 00:05:00.319 TEST_HEADER include/spdk/trace_parser.h 00:05:00.319 TEST_HEADER include/spdk/tree.h 00:05:00.319 TEST_HEADER include/spdk/ublk.h 00:05:00.319 TEST_HEADER include/spdk/util.h 00:05:00.319 TEST_HEADER include/spdk/uuid.h 00:05:00.319 TEST_HEADER include/spdk/version.h 00:05:00.319 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:00.319 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:00.319 TEST_HEADER include/spdk/vhost.h 00:05:00.319 TEST_HEADER include/spdk/vmd.h 00:05:00.319 TEST_HEADER include/spdk/xor.h 00:05:00.319 TEST_HEADER include/spdk/zipf.h 00:05:00.319 CXX test/cpp_headers/accel.o 00:05:00.319 CC test/env/mem_callbacks/mem_callbacks.o 00:05:00.319 CC test/event/event_perf/event_perf.o 00:05:00.319 CC test/event/reactor/reactor.o 00:05:00.319 LINK vhost 00:05:00.319 CC test/event/reactor_perf/reactor_perf.o 00:05:00.319 LINK bdev_svc 00:05:00.319 LINK vtophys 00:05:00.319 LINK reactor 00:05:00.319 LINK event_perf 00:05:00.319 CXX test/cpp_headers/accel_module.o 00:05:00.319 CC examples/nvme/hello_world/hello_world.o 00:05:00.319 LINK mem_callbacks 00:05:00.578 CXX test/cpp_headers/assert.o 00:05:00.578 LINK reactor_perf 00:05:00.578 CXX test/cpp_headers/barrier.o 00:05:00.578 LINK spdk_bdev 00:05:00.578 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:00.578 CXX test/cpp_headers/base64.o 00:05:00.578 LINK hello_world 00:05:00.837 CC examples/accel/perf/accel_perf.o 00:05:00.837 CC test/event/app_repeat/app_repeat.o 00:05:00.837 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:05:00.837 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:00.837 CC test/rpc_client/rpc_client_test.o 00:05:00.837 CC test/nvme/aer/aer.o 00:05:00.837 LINK env_dpdk_post_init 00:05:00.837 CC examples/blob/hello_world/hello_blob.o 00:05:00.837 CXX test/cpp_headers/bdev.o 00:05:00.837 LINK app_repeat 00:05:00.837 CC examples/nvme/reconnect/reconnect.o 00:05:01.097 LINK rpc_client_test 00:05:01.097 LINK hello_fsdev 00:05:01.097 CXX test/cpp_headers/bdev_module.o 00:05:01.097 CC test/env/memory/memory_ut.o 00:05:01.097 LINK hello_blob 00:05:01.097 LINK aer 00:05:01.097 CXX test/cpp_headers/bdev_zone.o 00:05:01.097 LINK nvme_fuzz 00:05:01.097 LINK accel_perf 00:05:01.097 CC test/event/scheduler/scheduler.o 00:05:01.097 CXX test/cpp_headers/bit_array.o 00:05:01.356 LINK reconnect 00:05:01.356 CXX test/cpp_headers/bit_pool.o 00:05:01.356 CC test/nvme/reset/reset.o 00:05:01.356 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:01.356 CC examples/blob/cli/blobcli.o 00:05:01.356 CC test/app/histogram_perf/histogram_perf.o 00:05:01.356 LINK scheduler 00:05:01.356 CC test/nvme/sgl/sgl.o 00:05:01.356 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:01.614 CXX test/cpp_headers/blob_bdev.o 00:05:01.614 CC test/env/pci/pci_ut.o 00:05:01.614 LINK histogram_perf 00:05:01.614 CXX test/cpp_headers/blobfs_bdev.o 00:05:01.614 LINK reset 00:05:01.614 CXX test/cpp_headers/blobfs.o 00:05:01.614 LINK sgl 00:05:01.873 LINK memory_ut 00:05:01.873 CC examples/nvme/arbitration/arbitration.o 00:05:01.873 CC examples/nvme/hotplug/hotplug.o 00:05:01.873 LINK blobcli 00:05:01.873 CXX test/cpp_headers/blob.o 00:05:01.873 LINK pci_ut 00:05:01.873 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:01.873 LINK nvme_manage 00:05:02.136 CC test/nvme/e2edp/nvme_dp.o 00:05:02.136 CXX test/cpp_headers/conf.o 00:05:02.136 CXX test/cpp_headers/config.o 00:05:02.136 CXX test/cpp_headers/cpuset.o 00:05:02.136 LINK hotplug 00:05:02.136 LINK cmb_copy 00:05:02.136 CC test/accel/dif/dif.o 00:05:02.136 CC examples/nvme/abort/abort.o 00:05:02.136 LINK arbitration 00:05:02.136 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:02.136 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:02.395 LINK nvme_dp 00:05:02.395 CXX test/cpp_headers/crc16.o 00:05:02.395 CXX test/cpp_headers/crc32.o 00:05:02.395 CXX test/cpp_headers/crc64.o 00:05:02.395 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:02.654 CC test/nvme/overhead/overhead.o 00:05:02.654 CC examples/bdev/hello_world/hello_bdev.o 00:05:02.654 CXX test/cpp_headers/dif.o 00:05:02.654 CXX test/cpp_headers/dma.o 00:05:02.654 LINK abort 00:05:02.654 CC examples/bdev/bdevperf/bdevperf.o 00:05:02.654 LINK pmr_persistence 00:05:02.654 LINK vhost_fuzz 00:05:02.654 CXX test/cpp_headers/endian.o 00:05:02.654 CXX test/cpp_headers/env_dpdk.o 00:05:02.913 LINK hello_bdev 00:05:02.913 LINK dif 00:05:02.913 LINK overhead 00:05:02.913 CC test/nvme/err_injection/err_injection.o 00:05:02.913 CC test/app/jsoncat/jsoncat.o 00:05:02.913 CXX test/cpp_headers/env.o 00:05:02.913 CC test/blobfs/mkfs/mkfs.o 00:05:02.913 CXX test/cpp_headers/event.o 00:05:02.913 CC test/app/stub/stub.o 00:05:02.913 LINK iscsi_fuzz 00:05:02.913 CXX test/cpp_headers/fd_group.o 00:05:03.172 LINK err_injection 00:05:03.172 LINK jsoncat 00:05:03.172 LINK mkfs 00:05:03.172 CXX test/cpp_headers/fd.o 00:05:03.172 CXX test/cpp_headers/file.o 00:05:03.172 CXX test/cpp_headers/fsdev.o 00:05:03.172 LINK stub 00:05:03.172 CXX test/cpp_headers/fsdev_module.o 00:05:03.172 CC test/nvme/startup/startup.o 00:05:03.172 
CC test/lvol/esnap/esnap.o 00:05:03.431 CXX test/cpp_headers/ftl.o 00:05:03.431 CC test/bdev/bdevio/bdevio.o 00:05:03.431 CXX test/cpp_headers/fuse_dispatcher.o 00:05:03.431 CXX test/cpp_headers/gpt_spec.o 00:05:03.431 LINK bdevperf 00:05:03.431 LINK startup 00:05:03.431 CC test/nvme/reserve/reserve.o 00:05:03.431 CC test/nvme/connect_stress/connect_stress.o 00:05:03.431 CC test/nvme/simple_copy/simple_copy.o 00:05:03.690 CXX test/cpp_headers/hexlify.o 00:05:03.690 LINK connect_stress 00:05:03.690 CC test/nvme/boot_partition/boot_partition.o 00:05:03.690 CC test/nvme/compliance/nvme_compliance.o 00:05:03.690 LINK reserve 00:05:03.690 CC test/nvme/fused_ordering/fused_ordering.o 00:05:03.690 LINK simple_copy 00:05:03.690 CXX test/cpp_headers/histogram_data.o 00:05:03.690 LINK bdevio 00:05:03.690 CC examples/nvmf/nvmf/nvmf.o 00:05:03.949 LINK boot_partition 00:05:03.949 CXX test/cpp_headers/idxd.o 00:05:03.949 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:03.949 CC test/nvme/fdp/fdp.o 00:05:03.949 LINK fused_ordering 00:05:03.949 CC test/nvme/cuse/cuse.o 00:05:03.949 CXX test/cpp_headers/idxd_spec.o 00:05:03.949 LINK nvme_compliance 00:05:03.949 CXX test/cpp_headers/init.o 00:05:03.949 CXX test/cpp_headers/ioat.o 00:05:03.949 CXX test/cpp_headers/ioat_spec.o 00:05:04.208 LINK doorbell_aers 00:05:04.208 LINK nvmf 00:05:04.208 CXX test/cpp_headers/iscsi_spec.o 00:05:04.208 CXX test/cpp_headers/json.o 00:05:04.208 CXX test/cpp_headers/jsonrpc.o 00:05:04.208 CXX test/cpp_headers/keyring.o 00:05:04.208 CXX test/cpp_headers/keyring_module.o 00:05:04.208 CXX test/cpp_headers/likely.o 00:05:04.208 LINK fdp 00:05:04.208 CXX test/cpp_headers/log.o 00:05:04.208 CXX test/cpp_headers/lvol.o 00:05:04.208 CXX test/cpp_headers/md5.o 00:05:04.208 CXX test/cpp_headers/memory.o 00:05:04.467 CXX test/cpp_headers/mmio.o 00:05:04.467 CXX test/cpp_headers/nbd.o 00:05:04.467 CXX test/cpp_headers/net.o 00:05:04.467 CXX test/cpp_headers/notify.o 00:05:04.467 CXX test/cpp_headers/nvme.o 00:05:04.467 CXX test/cpp_headers/nvme_intel.o 00:05:04.467 CXX test/cpp_headers/nvme_ocssd.o 00:05:04.467 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:04.467 CXX test/cpp_headers/nvme_spec.o 00:05:04.467 CXX test/cpp_headers/nvme_zns.o 00:05:04.467 CXX test/cpp_headers/nvmf_cmd.o 00:05:04.467 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:04.726 CXX test/cpp_headers/nvmf.o 00:05:04.726 CXX test/cpp_headers/nvmf_spec.o 00:05:04.726 CXX test/cpp_headers/nvmf_transport.o 00:05:04.726 CXX test/cpp_headers/opal.o 00:05:04.726 CXX test/cpp_headers/opal_spec.o 00:05:04.726 CXX test/cpp_headers/pci_ids.o 00:05:04.726 CXX test/cpp_headers/pipe.o 00:05:04.726 CXX test/cpp_headers/queue.o 00:05:04.726 CXX test/cpp_headers/reduce.o 00:05:04.726 CXX test/cpp_headers/rpc.o 00:05:04.726 CXX test/cpp_headers/scheduler.o 00:05:04.985 CXX test/cpp_headers/scsi.o 00:05:04.985 CXX test/cpp_headers/scsi_spec.o 00:05:04.985 CXX test/cpp_headers/sock.o 00:05:04.985 CXX test/cpp_headers/stdinc.o 00:05:04.985 CXX test/cpp_headers/string.o 00:05:04.985 CXX test/cpp_headers/thread.o 00:05:04.985 CXX test/cpp_headers/trace.o 00:05:04.985 CXX test/cpp_headers/trace_parser.o 00:05:04.985 CXX test/cpp_headers/tree.o 00:05:04.985 CXX test/cpp_headers/ublk.o 00:05:04.985 CXX test/cpp_headers/util.o 00:05:04.985 CXX test/cpp_headers/uuid.o 00:05:04.985 LINK cuse 00:05:04.985 CXX test/cpp_headers/version.o 00:05:05.245 CXX test/cpp_headers/vfio_user_pci.o 00:05:05.245 CXX test/cpp_headers/vfio_user_spec.o 00:05:05.245 CXX test/cpp_headers/vhost.o 00:05:05.245 CXX 
test/cpp_headers/vmd.o 00:05:05.245 CXX test/cpp_headers/xor.o 00:05:05.245 CXX test/cpp_headers/zipf.o 00:05:07.780 LINK esnap 00:05:08.349 00:05:08.349 real 1m22.894s 00:05:08.349 user 6m28.460s 00:05:08.349 sys 1m12.182s 00:05:08.349 18:22:25 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:08.349 18:22:25 make -- common/autotest_common.sh@10 -- $ set +x 00:05:08.349 ************************************ 00:05:08.349 END TEST make 00:05:08.349 ************************************ 00:05:08.349 18:22:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:08.349 18:22:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:08.349 18:22:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:08.349 18:22:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.349 18:22:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:08.349 18:22:26 -- pm/common@44 -- $ pid=5973 00:05:08.349 18:22:26 -- pm/common@50 -- $ kill -TERM 5973 00:05:08.349 18:22:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.349 18:22:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:08.349 18:22:26 -- pm/common@44 -- $ pid=5975 00:05:08.349 18:22:26 -- pm/common@50 -- $ kill -TERM 5975 00:05:08.349 18:22:26 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.349 18:22:26 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.349 18:22:26 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.349 18:22:26 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.349 18:22:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.349 18:22:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.349 18:22:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.349 18:22:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.349 18:22:26 -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.349 18:22:26 -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.349 18:22:26 -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.349 18:22:26 -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.349 18:22:26 -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.349 18:22:26 -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.349 18:22:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.349 18:22:26 -- scripts/common.sh@344 -- # case "$op" in 00:05:08.349 18:22:26 -- scripts/common.sh@345 -- # : 1 00:05:08.349 18:22:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.349 18:22:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.349 18:22:26 -- scripts/common.sh@365 -- # decimal 1 00:05:08.349 18:22:26 -- scripts/common.sh@353 -- # local d=1 00:05:08.349 18:22:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.349 18:22:26 -- scripts/common.sh@355 -- # echo 1 00:05:08.349 18:22:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.349 18:22:26 -- scripts/common.sh@366 -- # decimal 2 00:05:08.349 18:22:26 -- scripts/common.sh@353 -- # local d=2 00:05:08.349 18:22:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.349 18:22:26 -- scripts/common.sh@355 -- # echo 2 00:05:08.349 18:22:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.349 18:22:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.349 18:22:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.349 18:22:26 -- scripts/common.sh@368 -- # return 0 00:05:08.349 18:22:26 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.349 18:22:26 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.349 --rc genhtml_branch_coverage=1 00:05:08.349 --rc genhtml_function_coverage=1 00:05:08.349 --rc genhtml_legend=1 00:05:08.349 --rc geninfo_all_blocks=1 00:05:08.349 --rc geninfo_unexecuted_blocks=1 00:05:08.349 00:05:08.349 ' 00:05:08.349 18:22:26 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.349 --rc genhtml_branch_coverage=1 00:05:08.349 --rc genhtml_function_coverage=1 00:05:08.349 --rc genhtml_legend=1 00:05:08.349 --rc geninfo_all_blocks=1 00:05:08.349 --rc geninfo_unexecuted_blocks=1 00:05:08.349 00:05:08.349 ' 00:05:08.349 18:22:26 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.349 --rc genhtml_branch_coverage=1 00:05:08.349 --rc genhtml_function_coverage=1 00:05:08.349 --rc genhtml_legend=1 00:05:08.349 --rc geninfo_all_blocks=1 00:05:08.349 --rc geninfo_unexecuted_blocks=1 00:05:08.349 00:05:08.349 ' 00:05:08.349 18:22:26 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.349 --rc genhtml_branch_coverage=1 00:05:08.349 --rc genhtml_function_coverage=1 00:05:08.349 --rc genhtml_legend=1 00:05:08.349 --rc geninfo_all_blocks=1 00:05:08.349 --rc geninfo_unexecuted_blocks=1 00:05:08.349 00:05:08.349 ' 00:05:08.349 18:22:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.349 18:22:26 -- nvmf/common.sh@7 -- # uname -s 00:05:08.349 18:22:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.349 18:22:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.349 18:22:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.349 18:22:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.349 18:22:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.349 18:22:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.349 18:22:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.349 18:22:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.349 18:22:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.349 18:22:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.349 18:22:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:05:08.349 
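The xtrace in this stretch steps through scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 before enabling the branch/function coverage flags. Below is a minimal standalone sketch of that dotted-version comparison; the helper name and the simplified loop are illustrative, not the repo's exact code.

version_lt() {
    # Split both versions on '.' and compare numerically, component by component,
    # padding the shorter one with zeros -- the same idea the traced lt()/cmp_versions() use.
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal, so not "less than"
}

# Matches the decision visible in the trace: 1.15 < 2, so the extra --rc flags get added.
if version_lt 1.15 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi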
18:22:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:05:08.349 18:22:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.349 18:22:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.349 18:22:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:08.349 18:22:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.349 18:22:26 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.349 18:22:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.349 18:22:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.349 18:22:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.349 18:22:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.349 18:22:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.349 18:22:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.349 18:22:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.349 18:22:26 -- paths/export.sh@5 -- # export PATH 00:05:08.349 18:22:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.349 18:22:26 -- nvmf/common.sh@51 -- # : 0 00:05:08.349 18:22:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.349 18:22:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.349 18:22:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.349 18:22:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.349 18:22:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.349 18:22:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.349 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.349 18:22:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.349 18:22:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.349 18:22:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.349 18:22:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:08.349 18:22:26 -- spdk/autotest.sh@32 -- # uname -s 00:05:08.349 18:22:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:08.349 18:22:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:08.349 18:22:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:08.349 18:22:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:08.349 18:22:26 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:08.349 18:22:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:08.608 18:22:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:08.608 18:22:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:08.608 18:22:26 -- spdk/autotest.sh@48 -- # udevadm_pid=66518 00:05:08.608 18:22:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:08.608 18:22:26 -- pm/common@17 -- # local monitor 00:05:08.608 18:22:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:08.608 18:22:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.608 18:22:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.608 18:22:26 -- pm/common@25 -- # sleep 1 00:05:08.608 18:22:26 -- pm/common@21 -- # date +%s 00:05:08.608 18:22:26 -- pm/common@21 -- # date +%s 00:05:08.608 18:22:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733682146 00:05:08.608 18:22:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733682146 00:05:08.608 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733682146_collect-vmstat.pm.log 00:05:08.608 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733682146_collect-cpu-load.pm.log 00:05:09.540 18:22:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:09.540 18:22:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:09.540 18:22:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.540 18:22:27 -- common/autotest_common.sh@10 -- # set +x 00:05:09.540 18:22:27 -- spdk/autotest.sh@59 -- # create_test_list 00:05:09.540 18:22:27 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:09.540 18:22:27 -- common/autotest_common.sh@10 -- # set +x 00:05:09.540 18:22:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:09.540 18:22:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:09.540 18:22:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:09.540 18:22:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:09.540 18:22:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:09.540 18:22:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:09.540 18:22:27 -- common/autotest_common.sh@1455 -- # uname 00:05:09.540 18:22:27 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:09.540 18:22:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:09.540 18:22:27 -- common/autotest_common.sh@1475 -- # uname 00:05:09.540 18:22:27 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:09.540 18:22:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:09.540 18:22:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:09.796 lcov: LCOV version 1.15 00:05:09.796 18:22:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:24.692 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:24.692 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:36.893 18:22:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:36.893 18:22:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.893 18:22:54 -- common/autotest_common.sh@10 -- # set +x 00:05:36.893 18:22:54 -- spdk/autotest.sh@78 -- # rm -f 00:05:36.893 18:22:54 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.458 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.458 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:37.458 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:37.458 18:22:55 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:37.458 18:22:55 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:37.458 18:22:55 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:37.458 18:22:55 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:37.458 18:22:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:37.458 18:22:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:37.458 18:22:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:37.458 18:22:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:37.458 18:22:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:37.458 18:22:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:37.458 18:22:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:37.458 18:22:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:37.458 18:22:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:37.458 18:22:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:37.458 18:22:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:37.458 18:22:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:37.458 18:22:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:37.458 18:22:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:37.458 18:22:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:37.458 18:22:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:37.458 18:22:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:37.458 18:22:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:37.458 18:22:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:37.458 18:22:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:37.458 18:22:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:37.458 18:22:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.458 18:22:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.458 18:22:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:37.458 18:22:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:37.458 18:22:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:37.458 No valid GPT data, bailing 
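The pre-cleanup trace around here loops over each NVMe namespace, skips zoned devices, probes for an existing partition table (spdk-gpt.py, then blkid), and zeroes the first MiB of anything unused so later tests start from a blank device. A condensed sketch of that flow, assuming for illustration that the blkid PTTYPE probe alone is a good-enough in-use check:

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do            # namespaces only, no partitions (same glob as the trace)
    name=$(basename "$dev")
    # leave zoned namespaces alone
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(< "/sys/block/$name/queue/zoned") != none ]]; then
        continue
    fi
    # a recognizable partition table means the device may hold data
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
        continue
    fi
    dd if=/dev/zero of="$dev" bs=1M count=1  # blank the first MiB, as the trace does
done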
00:05:37.458 18:22:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:37.458 18:22:55 -- scripts/common.sh@394 -- # pt= 00:05:37.458 18:22:55 -- scripts/common.sh@395 -- # return 1 00:05:37.458 18:22:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:37.458 1+0 records in 00:05:37.458 1+0 records out 00:05:37.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477528 s, 220 MB/s 00:05:37.458 18:22:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.458 18:22:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.458 18:22:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:37.458 18:22:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:37.458 18:22:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:37.458 No valid GPT data, bailing 00:05:37.458 18:22:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:37.458 18:22:55 -- scripts/common.sh@394 -- # pt= 00:05:37.458 18:22:55 -- scripts/common.sh@395 -- # return 1 00:05:37.458 18:22:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:37.458 1+0 records in 00:05:37.458 1+0 records out 00:05:37.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446921 s, 235 MB/s 00:05:37.458 18:22:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.458 18:22:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.458 18:22:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:37.458 18:22:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:37.458 18:22:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:37.715 No valid GPT data, bailing 00:05:37.715 18:22:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:37.715 18:22:55 -- scripts/common.sh@394 -- # pt= 00:05:37.715 18:22:55 -- scripts/common.sh@395 -- # return 1 00:05:37.715 18:22:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:37.715 1+0 records in 00:05:37.715 1+0 records out 00:05:37.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465934 s, 225 MB/s 00:05:37.715 18:22:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:37.715 18:22:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:37.715 18:22:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:37.715 18:22:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:37.715 18:22:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:37.715 No valid GPT data, bailing 00:05:37.715 18:22:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:37.715 18:22:55 -- scripts/common.sh@394 -- # pt= 00:05:37.715 18:22:55 -- scripts/common.sh@395 -- # return 1 00:05:37.715 18:22:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:37.715 1+0 records in 00:05:37.715 1+0 records out 00:05:37.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446914 s, 235 MB/s 00:05:37.715 18:22:55 -- spdk/autotest.sh@105 -- # sync 00:05:37.715 18:22:55 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:37.715 18:22:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:37.715 18:22:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:39.620 18:22:57 -- spdk/autotest.sh@111 -- # uname -s 00:05:39.620 18:22:57 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:05:39.620 18:22:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:39.620 18:22:57 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:40.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.196 Hugepages 00:05:40.196 node hugesize free / total 00:05:40.196 node0 1048576kB 0 / 0 00:05:40.196 node0 2048kB 0 / 0 00:05:40.196 00:05:40.196 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:40.196 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:40.454 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:40.454 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:40.454 18:22:58 -- spdk/autotest.sh@117 -- # uname -s 00:05:40.454 18:22:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:40.454 18:22:58 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:40.454 18:22:58 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.021 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.281 18:22:59 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:42.217 18:23:00 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:42.217 18:23:00 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:42.217 18:23:00 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:42.217 18:23:00 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:42.217 18:23:00 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:42.217 18:23:00 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:42.217 18:23:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.217 18:23:00 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.217 18:23:00 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:42.476 18:23:00 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:42.476 18:23:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:42.476 18:23:00 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.734 Waiting for block devices as requested 00:05:42.734 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.992 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.992 18:23:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.992 18:23:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.992 18:23:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.992 18:23:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.992 18:23:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.992 18:23:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1541 -- # continue 00:05:42.992 18:23:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.992 18:23:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:42.992 18:23:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:42.992 18:23:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.992 18:23:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.992 18:23:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.992 18:23:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.992 18:23:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.992 18:23:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.992 18:23:00 -- common/autotest_common.sh@1541 -- # continue 00:05:42.992 18:23:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:42.992 18:23:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.992 18:23:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.992 18:23:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:42.992 18:23:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.992 18:23:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.992 18:23:00 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.559 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.819 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.819 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.819 18:23:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:43.819 18:23:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.819 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:05:43.819 18:23:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:43.819 18:23:01 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:43.819 18:23:01 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.819 18:23:01 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:43.819 18:23:01 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:43.819 18:23:01 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:43.819 18:23:01 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:43.819 18:23:01 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:43.819 18:23:01 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:43.819 18:23:01 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:43.819 18:23:01 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.819 18:23:01 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.819 18:23:01 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:44.079 18:23:01 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:44.079 18:23:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:44.079 18:23:01 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:44.079 18:23:01 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:44.079 18:23:01 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:44.079 18:23:01 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.079 18:23:01 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:44.079 18:23:01 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:44.079 18:23:01 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:44.079 18:23:01 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.079 18:23:01 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:44.079 18:23:01 -- common/autotest_common.sh@1570 -- # return 0 00:05:44.079 18:23:01 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:44.079 18:23:01 -- common/autotest_common.sh@1578 -- # return 0 00:05:44.079 18:23:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:44.079 18:23:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:44.079 18:23:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:44.079 18:23:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:44.079 18:23:01 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:44.079 18:23:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:44.079 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.079 18:23:01 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:44.079 18:23:01 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:44.079 18:23:01 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:44.079 18:23:01 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.079 18:23:01 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.079 18:23:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.079 18:23:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.079 ************************************ 00:05:44.079 START TEST env 00:05:44.079 ************************************ 00:05:44.079 18:23:01 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.079 * Looking for test storage... 00:05:44.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:44.079 18:23:01 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.079 18:23:01 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.079 18:23:01 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.079 18:23:01 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.079 18:23:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.079 18:23:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.079 18:23:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.079 18:23:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.079 18:23:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.079 18:23:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.079 18:23:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.079 18:23:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.079 18:23:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.079 18:23:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.079 18:23:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.079 18:23:01 env -- scripts/common.sh@344 -- # case "$op" in 00:05:44.079 18:23:01 env -- scripts/common.sh@345 -- # : 1 00:05:44.079 18:23:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.079 18:23:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.079 18:23:01 env -- scripts/common.sh@365 -- # decimal 1 00:05:44.079 18:23:01 env -- scripts/common.sh@353 -- # local d=1 00:05:44.079 18:23:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.079 18:23:01 env -- scripts/common.sh@355 -- # echo 1 00:05:44.079 18:23:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.079 18:23:01 env -- scripts/common.sh@366 -- # decimal 2 00:05:44.079 18:23:01 env -- scripts/common.sh@353 -- # local d=2 00:05:44.079 18:23:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.079 18:23:01 env -- scripts/common.sh@355 -- # echo 2 00:05:44.079 18:23:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.079 18:23:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.079 18:23:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.079 18:23:01 env -- scripts/common.sh@368 -- # return 0 00:05:44.080 18:23:01 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.080 18:23:01 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.080 --rc genhtml_branch_coverage=1 00:05:44.080 --rc genhtml_function_coverage=1 00:05:44.080 --rc genhtml_legend=1 00:05:44.080 --rc geninfo_all_blocks=1 00:05:44.080 --rc geninfo_unexecuted_blocks=1 00:05:44.080 00:05:44.080 ' 00:05:44.080 18:23:01 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.080 --rc genhtml_branch_coverage=1 00:05:44.080 --rc genhtml_function_coverage=1 00:05:44.080 --rc genhtml_legend=1 00:05:44.080 --rc geninfo_all_blocks=1 00:05:44.080 --rc geninfo_unexecuted_blocks=1 00:05:44.080 00:05:44.080 ' 00:05:44.080 18:23:01 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.080 --rc genhtml_branch_coverage=1 00:05:44.080 --rc genhtml_function_coverage=1 00:05:44.080 --rc genhtml_legend=1 00:05:44.080 --rc geninfo_all_blocks=1 00:05:44.080 --rc geninfo_unexecuted_blocks=1 00:05:44.080 00:05:44.080 ' 00:05:44.080 18:23:01 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.080 --rc genhtml_branch_coverage=1 00:05:44.080 --rc genhtml_function_coverage=1 00:05:44.080 --rc genhtml_legend=1 00:05:44.080 --rc geninfo_all_blocks=1 00:05:44.080 --rc geninfo_unexecuted_blocks=1 00:05:44.080 00:05:44.080 ' 00:05:44.080 18:23:01 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.080 18:23:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.080 18:23:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.080 18:23:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.080 ************************************ 00:05:44.080 START TEST env_memory 00:05:44.080 ************************************ 00:05:44.080 18:23:02 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.339 00:05:44.339 00:05:44.339 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.339 http://cunit.sourceforge.net/ 00:05:44.339 00:05:44.339 00:05:44.339 Suite: memory 00:05:44.339 Test: alloc and free memory map ...[2024-12-08 18:23:02.053762] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:44.339 passed 00:05:44.339 Test: mem map translation ...[2024-12-08 18:23:02.084721] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:44.339 [2024-12-08 18:23:02.084759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:44.339 [2024-12-08 18:23:02.084813] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:44.339 [2024-12-08 18:23:02.084824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:44.339 passed 00:05:44.339 Test: mem map registration ...[2024-12-08 18:23:02.148663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:44.339 [2024-12-08 18:23:02.148694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:44.339 passed 00:05:44.339 Test: mem map adjacent registrations ...passed 00:05:44.339 00:05:44.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.339 suites 1 1 n/a 0 0 00:05:44.339 tests 4 4 4 0 0 00:05:44.339 asserts 152 152 152 0 n/a 00:05:44.339 00:05:44.339 Elapsed time = 0.213 seconds 00:05:44.339 00:05:44.339 real 0m0.231s 00:05:44.339 user 0m0.215s 00:05:44.339 sys 0m0.011s 00:05:44.339 18:23:02 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.339 18:23:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:44.339 ************************************ 00:05:44.339 END TEST env_memory 00:05:44.339 ************************************ 00:05:44.599 18:23:02 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.599 18:23:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.599 18:23:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.599 18:23:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.599 ************************************ 00:05:44.599 START TEST env_vtophys 00:05:44.599 ************************************ 00:05:44.599 18:23:02 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.599 EAL: lib.eal log level changed from notice to debug 00:05:44.599 EAL: Detected lcore 0 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 1 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 2 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 3 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 4 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 5 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 6 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 7 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 8 as core 0 on socket 0 00:05:44.599 EAL: Detected lcore 9 as core 0 on socket 0 00:05:44.599 EAL: Maximum logical cores by configuration: 128 00:05:44.599 EAL: Detected CPU lcores: 10 00:05:44.599 EAL: Detected NUMA nodes: 1 00:05:44.599 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:44.599 EAL: Detected shared linkage of DPDK 00:05:44.599 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:44.599 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:44.599 EAL: Registered [vdev] bus. 00:05:44.599 EAL: bus.vdev log level changed from disabled to notice 00:05:44.599 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:44.599 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:44.599 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:44.599 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:44.599 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:44.599 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:44.599 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:44.599 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:44.599 EAL: No shared files mode enabled, IPC will be disabled 00:05:44.599 EAL: No shared files mode enabled, IPC is disabled 00:05:44.599 EAL: Selected IOVA mode 'PA' 00:05:44.599 EAL: Probing VFIO support... 00:05:44.599 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.599 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:44.599 EAL: Ask a virtual area of 0x2e000 bytes 00:05:44.599 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:44.599 EAL: Setting up physically contiguous memory... 00:05:44.599 EAL: Setting maximum number of open files to 524288 00:05:44.599 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:44.599 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:44.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.599 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:44.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.599 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:44.599 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:44.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.599 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:44.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.599 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:44.599 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:44.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.599 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:44.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.599 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:44.599 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:44.599 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.599 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:44.599 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.599 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.599 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:44.599 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:44.600 EAL: Hugepages will be freed exactly as allocated. 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: TSC frequency is ~2200000 KHz 00:05:44.600 EAL: Main lcore 0 is ready (tid=7fd142245a00;cpuset=[0]) 00:05:44.600 EAL: Trying to obtain current memory policy. 00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 0 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 2MB 00:05:44.600 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:44.600 EAL: Mem event callback 'spdk:(nil)' registered 00:05:44.600 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:44.600 00:05:44.600 00:05:44.600 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.600 http://cunit.sourceforge.net/ 00:05:44.600 00:05:44.600 00:05:44.600 Suite: components_suite 00:05:44.600 Test: vtophys_malloc_test ...passed 00:05:44.600 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 4 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 4MB 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was shrunk by 4MB 00:05:44.600 EAL: Trying to obtain current memory policy. 00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 4 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 6MB 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was shrunk by 6MB 00:05:44.600 EAL: Trying to obtain current memory policy. 00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 4 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 10MB 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was shrunk by 10MB 00:05:44.600 EAL: Trying to obtain current memory policy. 
00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 4 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 18MB 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was shrunk by 18MB 00:05:44.600 EAL: Trying to obtain current memory policy. 00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 4 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.600 EAL: Trying to obtain current memory policy. 00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 4 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 66MB 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was shrunk by 66MB 00:05:44.600 EAL: Trying to obtain current memory policy. 00:05:44.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.600 EAL: Restoring previous memory policy: 4 00:05:44.600 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.600 EAL: request: mp_malloc_sync 00:05:44.600 EAL: No shared files mode enabled, IPC is disabled 00:05:44.600 EAL: Heap on socket 0 was expanded by 130MB 00:05:44.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.859 EAL: request: mp_malloc_sync 00:05:44.859 EAL: No shared files mode enabled, IPC is disabled 00:05:44.859 EAL: Heap on socket 0 was shrunk by 130MB 00:05:44.859 EAL: Trying to obtain current memory policy. 00:05:44.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.859 EAL: Restoring previous memory policy: 4 00:05:44.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.859 EAL: request: mp_malloc_sync 00:05:44.859 EAL: No shared files mode enabled, IPC is disabled 00:05:44.859 EAL: Heap on socket 0 was expanded by 258MB 00:05:44.859 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.859 EAL: request: mp_malloc_sync 00:05:44.859 EAL: No shared files mode enabled, IPC is disabled 00:05:44.859 EAL: Heap on socket 0 was shrunk by 258MB 00:05:44.859 EAL: Trying to obtain current memory policy. 
00:05:44.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.119 EAL: Restoring previous memory policy: 4 00:05:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.119 EAL: request: mp_malloc_sync 00:05:45.119 EAL: No shared files mode enabled, IPC is disabled 00:05:45.119 EAL: Heap on socket 0 was expanded by 514MB 00:05:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.119 EAL: request: mp_malloc_sync 00:05:45.119 EAL: No shared files mode enabled, IPC is disabled 00:05:45.119 EAL: Heap on socket 0 was shrunk by 514MB 00:05:45.119 EAL: Trying to obtain current memory policy. 00:05:45.119 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.379 EAL: Restoring previous memory policy: 4 00:05:45.379 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.379 EAL: request: mp_malloc_sync 00:05:45.379 EAL: No shared files mode enabled, IPC is disabled 00:05:45.379 EAL: Heap on socket 0 was expanded by 1026MB 00:05:45.638 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.897 passed 00:05:45.897 00:05:45.897 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.897 suites 1 1 n/a 0 0 00:05:45.897 tests 2 2 2 0 0 00:05:45.897 asserts 6100 6100 6100 0 n/a 00:05:45.897 00:05:45.897 Elapsed time = 1.196 seconds 00:05:45.897 EAL: request: mp_malloc_sync 00:05:45.897 EAL: No shared files mode enabled, IPC is disabled 00:05:45.897 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:45.897 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.897 EAL: request: mp_malloc_sync 00:05:45.897 EAL: No shared files mode enabled, IPC is disabled 00:05:45.897 EAL: Heap on socket 0 was shrunk by 2MB 00:05:45.897 EAL: No shared files mode enabled, IPC is disabled 00:05:45.897 EAL: No shared files mode enabled, IPC is disabled 00:05:45.897 EAL: No shared files mode enabled, IPC is disabled 00:05:45.897 00:05:45.897 real 0m1.385s 00:05:45.897 user 0m0.773s 00:05:45.897 sys 0m0.483s 00:05:45.897 18:23:03 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.897 ************************************ 00:05:45.897 END TEST env_vtophys 00:05:45.897 ************************************ 00:05:45.897 18:23:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:45.897 18:23:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.897 18:23:03 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.897 18:23:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.897 18:23:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.897 ************************************ 00:05:45.897 START TEST env_pci 00:05:45.897 ************************************ 00:05:45.897 18:23:03 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.897 00:05:45.897 00:05:45.897 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.897 http://cunit.sourceforge.net/ 00:05:45.897 00:05:45.897 00:05:45.897 Suite: pci 00:05:45.897 Test: pci_hook ...[2024-12-08 18:23:03.739075] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68704 has claimed it 00:05:45.897 passed 00:05:45.897 00:05:45.897 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.897 suites 1 1 n/a 0 0 00:05:45.897 tests 1 1 1 0 0 00:05:45.897 asserts 25 25 25 0 n/a 00:05:45.897 00:05:45.897 Elapsed time = 0.002 seconds 00:05:45.897 EAL: Cannot find 
device (10000:00:01.0) 00:05:45.897 EAL: Failed to attach device on primary process 00:05:45.897 00:05:45.897 real 0m0.017s 00:05:45.897 user 0m0.007s 00:05:45.897 sys 0m0.010s 00:05:45.897 18:23:03 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.897 ************************************ 00:05:45.897 END TEST env_pci 00:05:45.897 ************************************ 00:05:45.897 18:23:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:45.897 18:23:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:45.897 18:23:03 env -- env/env.sh@15 -- # uname 00:05:45.897 18:23:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:45.897 18:23:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:45.897 18:23:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.897 18:23:03 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:45.897 18:23:03 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.898 18:23:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.898 ************************************ 00:05:45.898 START TEST env_dpdk_post_init 00:05:45.898 ************************************ 00:05:45.898 18:23:03 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:46.157 EAL: Detected CPU lcores: 10 00:05:46.157 EAL: Detected NUMA nodes: 1 00:05:46.157 EAL: Detected shared linkage of DPDK 00:05:46.157 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.157 EAL: Selected IOVA mode 'PA' 00:05:46.157 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.157 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:46.157 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:46.157 Starting DPDK initialization... 00:05:46.157 Starting SPDK post initialization... 00:05:46.157 SPDK NVMe probe 00:05:46.157 Attaching to 0000:00:10.0 00:05:46.157 Attaching to 0000:00:11.0 00:05:46.157 Attached to 0000:00:10.0 00:05:46.157 Attached to 0000:00:11.0 00:05:46.157 Cleaning up... 
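The env_dpdk_post_init run that just finished could only attach spdk_nvme to 0000:00:10.0 and 0000:00:11.0 because setup.sh had earlier rebound those controllers from the kernel nvme driver to uio_pci_generic (the "nvme -> uio_pci_generic" lines above). A rough sketch of that rebinding via sysfs; the exact steps setup.sh performs are more involved, so treat this as illustrative only:

bdf=0000:00:10.0                             # one of the two BDFs seen in the trace
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"          # detach from the nvme driver
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind          # hand the device to uio_pci_generic
# setup.sh reset later reverses this, which is the uio_pci_generic -> nvme flip seen in the log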
00:05:46.157 00:05:46.157 real 0m0.170s 00:05:46.157 user 0m0.039s 00:05:46.157 sys 0m0.032s 00:05:46.157 18:23:03 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.157 ************************************ 00:05:46.157 END TEST env_dpdk_post_init 00:05:46.157 ************************************ 00:05:46.157 18:23:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.157 18:23:04 env -- env/env.sh@26 -- # uname 00:05:46.157 18:23:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:46.157 18:23:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.157 18:23:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.157 18:23:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.157 18:23:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.157 ************************************ 00:05:46.157 START TEST env_mem_callbacks 00:05:46.157 ************************************ 00:05:46.157 18:23:04 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.157 EAL: Detected CPU lcores: 10 00:05:46.157 EAL: Detected NUMA nodes: 1 00:05:46.157 EAL: Detected shared linkage of DPDK 00:05:46.157 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.157 EAL: Selected IOVA mode 'PA' 00:05:46.416 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.416 00:05:46.416 00:05:46.416 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.416 http://cunit.sourceforge.net/ 00:05:46.416 00:05:46.416 00:05:46.416 Suite: memory 00:05:46.416 Test: test ... 00:05:46.416 register 0x200000200000 2097152 00:05:46.416 malloc 3145728 00:05:46.416 register 0x200000400000 4194304 00:05:46.416 buf 0x200000500000 len 3145728 PASSED 00:05:46.416 malloc 64 00:05:46.416 buf 0x2000004fff40 len 64 PASSED 00:05:46.416 malloc 4194304 00:05:46.416 register 0x200000800000 6291456 00:05:46.416 buf 0x200000a00000 len 4194304 PASSED 00:05:46.416 free 0x200000500000 3145728 00:05:46.416 free 0x2000004fff40 64 00:05:46.416 unregister 0x200000400000 4194304 PASSED 00:05:46.416 free 0x200000a00000 4194304 00:05:46.416 unregister 0x200000800000 6291456 PASSED 00:05:46.416 malloc 8388608 00:05:46.416 register 0x200000400000 10485760 00:05:46.416 buf 0x200000600000 len 8388608 PASSED 00:05:46.416 free 0x200000600000 8388608 00:05:46.416 unregister 0x200000400000 10485760 PASSED 00:05:46.416 passed 00:05:46.416 00:05:46.416 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.416 suites 1 1 n/a 0 0 00:05:46.416 tests 1 1 1 0 0 00:05:46.416 asserts 15 15 15 0 n/a 00:05:46.416 00:05:46.416 Elapsed time = 0.007 seconds 00:05:46.416 00:05:46.416 real 0m0.137s 00:05:46.416 user 0m0.013s 00:05:46.416 sys 0m0.022s 00:05:46.416 18:23:04 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.416 ************************************ 00:05:46.416 END TEST env_mem_callbacks 00:05:46.416 ************************************ 00:05:46.416 18:23:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.416 00:05:46.416 real 0m2.394s 00:05:46.416 user 0m1.263s 00:05:46.416 sys 0m0.786s 00:05:46.416 18:23:04 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.416 ************************************ 00:05:46.416 END TEST env 00:05:46.416 ************************************ 00:05:46.416 18:23:04 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.416 18:23:04 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.416 18:23:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.416 18:23:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.416 18:23:04 -- common/autotest_common.sh@10 -- # set +x 00:05:46.416 ************************************ 00:05:46.416 START TEST rpc 00:05:46.416 ************************************ 00:05:46.416 18:23:04 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.416 * Looking for test storage... 00:05:46.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.416 18:23:04 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.416 18:23:04 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.416 18:23:04 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.676 18:23:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.676 18:23:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.676 18:23:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.676 18:23:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.676 18:23:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.676 18:23:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.676 18:23:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.676 18:23:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.676 18:23:04 rpc -- scripts/common.sh@345 -- # : 1 00:05:46.676 18:23:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.676 18:23:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.676 18:23:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.676 18:23:04 rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.676 18:23:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.676 18:23:04 rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.676 18:23:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.676 18:23:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.676 18:23:04 rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.676 18:23:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.676 18:23:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.676 18:23:04 rpc -- scripts/common.sh@368 -- # return 0 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.676 --rc genhtml_branch_coverage=1 00:05:46.676 --rc genhtml_function_coverage=1 00:05:46.676 --rc genhtml_legend=1 00:05:46.676 --rc geninfo_all_blocks=1 00:05:46.676 --rc geninfo_unexecuted_blocks=1 00:05:46.676 00:05:46.676 ' 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.676 --rc genhtml_branch_coverage=1 00:05:46.676 --rc genhtml_function_coverage=1 00:05:46.676 --rc genhtml_legend=1 00:05:46.676 --rc geninfo_all_blocks=1 00:05:46.676 --rc geninfo_unexecuted_blocks=1 00:05:46.676 00:05:46.676 ' 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.676 --rc genhtml_branch_coverage=1 00:05:46.676 --rc genhtml_function_coverage=1 00:05:46.676 --rc genhtml_legend=1 00:05:46.676 --rc geninfo_all_blocks=1 00:05:46.676 --rc geninfo_unexecuted_blocks=1 00:05:46.676 00:05:46.676 ' 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.676 --rc genhtml_branch_coverage=1 00:05:46.676 --rc genhtml_function_coverage=1 00:05:46.676 --rc genhtml_legend=1 00:05:46.676 --rc geninfo_all_blocks=1 00:05:46.676 --rc geninfo_unexecuted_blocks=1 00:05:46.676 00:05:46.676 ' 00:05:46.676 18:23:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68822 00:05:46.676 18:23:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:46.676 18:23:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.676 18:23:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68822 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@831 -- # '[' -z 68822 ']' 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.676 18:23:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.676 [2024-12-08 18:23:04.479804] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:46.676 [2024-12-08 18:23:04.479913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68822 ] 00:05:46.936 [2024-12-08 18:23:04.615874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.936 [2024-12-08 18:23:04.678199] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.936 [2024-12-08 18:23:04.678275] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68822' to capture a snapshot of events at runtime. 00:05:46.936 [2024-12-08 18:23:04.678302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.936 [2024-12-08 18:23:04.678310] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.936 [2024-12-08 18:23:04.678316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68822 for offline analysis/debug. 00:05:46.936 [2024-12-08 18:23:04.678355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.936 [2024-12-08 18:23:04.740532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.195 18:23:04 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.195 18:23:04 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.195 18:23:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.195 18:23:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.195 18:23:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.195 18:23:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.195 18:23:04 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.195 18:23:04 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.195 18:23:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.195 ************************************ 00:05:47.195 START TEST rpc_integrity 00:05:47.195 ************************************ 00:05:47.195 18:23:04 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:47.195 18:23:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.195 18:23:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.195 18:23:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.195 18:23:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.195 18:23:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.195 18:23:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.195 18:23:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.195 18:23:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:05:47.195 18:23:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.195 18:23:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.195 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.195 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.195 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.195 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.195 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.195 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.195 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.195 { 00:05:47.195 "name": "Malloc0", 00:05:47.195 "aliases": [ 00:05:47.195 "87f7ca59-beac-4a7c-83f8-bdeb6f15934b" 00:05:47.195 ], 00:05:47.195 "product_name": "Malloc disk", 00:05:47.195 "block_size": 512, 00:05:47.195 "num_blocks": 16384, 00:05:47.195 "uuid": "87f7ca59-beac-4a7c-83f8-bdeb6f15934b", 00:05:47.195 "assigned_rate_limits": { 00:05:47.195 "rw_ios_per_sec": 0, 00:05:47.195 "rw_mbytes_per_sec": 0, 00:05:47.195 "r_mbytes_per_sec": 0, 00:05:47.195 "w_mbytes_per_sec": 0 00:05:47.195 }, 00:05:47.195 "claimed": false, 00:05:47.195 "zoned": false, 00:05:47.195 "supported_io_types": { 00:05:47.195 "read": true, 00:05:47.195 "write": true, 00:05:47.195 "unmap": true, 00:05:47.195 "flush": true, 00:05:47.195 "reset": true, 00:05:47.195 "nvme_admin": false, 00:05:47.195 "nvme_io": false, 00:05:47.195 "nvme_io_md": false, 00:05:47.195 "write_zeroes": true, 00:05:47.195 "zcopy": true, 00:05:47.195 "get_zone_info": false, 00:05:47.195 "zone_management": false, 00:05:47.195 "zone_append": false, 00:05:47.195 "compare": false, 00:05:47.195 "compare_and_write": false, 00:05:47.195 "abort": true, 00:05:47.195 "seek_hole": false, 00:05:47.195 "seek_data": false, 00:05:47.195 "copy": true, 00:05:47.195 "nvme_iov_md": false 00:05:47.195 }, 00:05:47.195 "memory_domains": [ 00:05:47.195 { 00:05:47.195 "dma_device_id": "system", 00:05:47.195 "dma_device_type": 1 00:05:47.195 }, 00:05:47.195 { 00:05:47.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.195 "dma_device_type": 2 00:05:47.195 } 00:05:47.195 ], 00:05:47.195 "driver_specific": {} 00:05:47.195 } 00:05:47.195 ]' 00:05:47.195 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.195 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.195 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.195 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.195 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.195 [2024-12-08 18:23:05.088536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.195 [2024-12-08 18:23:05.088598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.195 [2024-12-08 18:23:05.088616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f9f500 00:05:47.195 [2024-12-08 18:23:05.088625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.195 [2024-12-08 18:23:05.090040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.195 [2024-12-08 18:23:05.090070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:05:47.196 Passthru0 00:05:47.196 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.196 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.196 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.196 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.454 { 00:05:47.454 "name": "Malloc0", 00:05:47.454 "aliases": [ 00:05:47.454 "87f7ca59-beac-4a7c-83f8-bdeb6f15934b" 00:05:47.454 ], 00:05:47.454 "product_name": "Malloc disk", 00:05:47.454 "block_size": 512, 00:05:47.454 "num_blocks": 16384, 00:05:47.454 "uuid": "87f7ca59-beac-4a7c-83f8-bdeb6f15934b", 00:05:47.454 "assigned_rate_limits": { 00:05:47.454 "rw_ios_per_sec": 0, 00:05:47.454 "rw_mbytes_per_sec": 0, 00:05:47.454 "r_mbytes_per_sec": 0, 00:05:47.454 "w_mbytes_per_sec": 0 00:05:47.454 }, 00:05:47.454 "claimed": true, 00:05:47.454 "claim_type": "exclusive_write", 00:05:47.454 "zoned": false, 00:05:47.454 "supported_io_types": { 00:05:47.454 "read": true, 00:05:47.454 "write": true, 00:05:47.454 "unmap": true, 00:05:47.454 "flush": true, 00:05:47.454 "reset": true, 00:05:47.454 "nvme_admin": false, 00:05:47.454 "nvme_io": false, 00:05:47.454 "nvme_io_md": false, 00:05:47.454 "write_zeroes": true, 00:05:47.454 "zcopy": true, 00:05:47.454 "get_zone_info": false, 00:05:47.454 "zone_management": false, 00:05:47.454 "zone_append": false, 00:05:47.454 "compare": false, 00:05:47.454 "compare_and_write": false, 00:05:47.454 "abort": true, 00:05:47.454 "seek_hole": false, 00:05:47.454 "seek_data": false, 00:05:47.454 "copy": true, 00:05:47.454 "nvme_iov_md": false 00:05:47.454 }, 00:05:47.454 "memory_domains": [ 00:05:47.454 { 00:05:47.454 "dma_device_id": "system", 00:05:47.454 "dma_device_type": 1 00:05:47.454 }, 00:05:47.454 { 00:05:47.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.454 "dma_device_type": 2 00:05:47.454 } 00:05:47.454 ], 00:05:47.454 "driver_specific": {} 00:05:47.454 }, 00:05:47.454 { 00:05:47.454 "name": "Passthru0", 00:05:47.454 "aliases": [ 00:05:47.454 "80e2e6c2-6c64-526e-bb83-4e5272c14b5c" 00:05:47.454 ], 00:05:47.454 "product_name": "passthru", 00:05:47.454 "block_size": 512, 00:05:47.454 "num_blocks": 16384, 00:05:47.454 "uuid": "80e2e6c2-6c64-526e-bb83-4e5272c14b5c", 00:05:47.454 "assigned_rate_limits": { 00:05:47.454 "rw_ios_per_sec": 0, 00:05:47.454 "rw_mbytes_per_sec": 0, 00:05:47.454 "r_mbytes_per_sec": 0, 00:05:47.454 "w_mbytes_per_sec": 0 00:05:47.454 }, 00:05:47.454 "claimed": false, 00:05:47.454 "zoned": false, 00:05:47.454 "supported_io_types": { 00:05:47.454 "read": true, 00:05:47.454 "write": true, 00:05:47.454 "unmap": true, 00:05:47.454 "flush": true, 00:05:47.454 "reset": true, 00:05:47.454 "nvme_admin": false, 00:05:47.454 "nvme_io": false, 00:05:47.454 "nvme_io_md": false, 00:05:47.454 "write_zeroes": true, 00:05:47.454 "zcopy": true, 00:05:47.454 "get_zone_info": false, 00:05:47.454 "zone_management": false, 00:05:47.454 "zone_append": false, 00:05:47.454 "compare": false, 00:05:47.454 "compare_and_write": false, 00:05:47.454 "abort": true, 00:05:47.454 "seek_hole": false, 00:05:47.454 "seek_data": false, 00:05:47.454 "copy": true, 00:05:47.454 "nvme_iov_md": false 00:05:47.454 }, 00:05:47.454 "memory_domains": [ 00:05:47.454 { 00:05:47.454 "dma_device_id": "system", 00:05:47.454 
"dma_device_type": 1 00:05:47.454 }, 00:05:47.454 { 00:05:47.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.454 "dma_device_type": 2 00:05:47.454 } 00:05:47.454 ], 00:05:47.454 "driver_specific": { 00:05:47.454 "passthru": { 00:05:47.454 "name": "Passthru0", 00:05:47.454 "base_bdev_name": "Malloc0" 00:05:47.454 } 00:05:47.454 } 00:05:47.454 } 00:05:47.454 ]' 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.454 18:23:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.454 00:05:47.454 real 0m0.325s 00:05:47.454 user 0m0.224s 00:05:47.454 sys 0m0.037s 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.454 18:23:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 ************************************ 00:05:47.454 END TEST rpc_integrity 00:05:47.454 ************************************ 00:05:47.454 18:23:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:47.454 18:23:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.454 18:23:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.454 18:23:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 ************************************ 00:05:47.454 START TEST rpc_plugins 00:05:47.454 ************************************ 00:05:47.454 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:47.454 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:47.454 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.454 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.454 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:47.454 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:47.454 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.454 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.454 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:47.454 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:47.454 { 00:05:47.454 "name": "Malloc1", 00:05:47.454 "aliases": [ 00:05:47.454 "4f28e8e0-914d-4ec3-9f7b-b047c3dbad6e" 00:05:47.454 ], 00:05:47.454 "product_name": "Malloc disk", 00:05:47.454 "block_size": 4096, 00:05:47.454 "num_blocks": 256, 00:05:47.454 "uuid": "4f28e8e0-914d-4ec3-9f7b-b047c3dbad6e", 00:05:47.454 "assigned_rate_limits": { 00:05:47.454 "rw_ios_per_sec": 0, 00:05:47.454 "rw_mbytes_per_sec": 0, 00:05:47.454 "r_mbytes_per_sec": 0, 00:05:47.454 "w_mbytes_per_sec": 0 00:05:47.454 }, 00:05:47.454 "claimed": false, 00:05:47.454 "zoned": false, 00:05:47.454 "supported_io_types": { 00:05:47.454 "read": true, 00:05:47.454 "write": true, 00:05:47.454 "unmap": true, 00:05:47.454 "flush": true, 00:05:47.454 "reset": true, 00:05:47.454 "nvme_admin": false, 00:05:47.454 "nvme_io": false, 00:05:47.454 "nvme_io_md": false, 00:05:47.454 "write_zeroes": true, 00:05:47.454 "zcopy": true, 00:05:47.454 "get_zone_info": false, 00:05:47.454 "zone_management": false, 00:05:47.454 "zone_append": false, 00:05:47.454 "compare": false, 00:05:47.454 "compare_and_write": false, 00:05:47.454 "abort": true, 00:05:47.454 "seek_hole": false, 00:05:47.454 "seek_data": false, 00:05:47.454 "copy": true, 00:05:47.454 "nvme_iov_md": false 00:05:47.454 }, 00:05:47.454 "memory_domains": [ 00:05:47.454 { 00:05:47.454 "dma_device_id": "system", 00:05:47.454 "dma_device_type": 1 00:05:47.454 }, 00:05:47.454 { 00:05:47.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.454 "dma_device_type": 2 00:05:47.454 } 00:05:47.454 ], 00:05:47.454 "driver_specific": {} 00:05:47.454 } 00:05:47.454 ]' 00:05:47.454 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:47.712 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:47.712 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.712 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.712 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:47.712 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:47.712 18:23:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:47.712 00:05:47.712 real 0m0.155s 00:05:47.712 user 0m0.105s 00:05:47.712 sys 0m0.016s 00:05:47.712 ************************************ 00:05:47.712 END TEST rpc_plugins 00:05:47.712 ************************************ 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.712 18:23:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.712 18:23:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:47.712 18:23:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.713 18:23:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.713 18:23:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.713 ************************************ 00:05:47.713 START TEST 
rpc_trace_cmd_test 00:05:47.713 ************************************ 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:47.713 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68822", 00:05:47.713 "tpoint_group_mask": "0x8", 00:05:47.713 "iscsi_conn": { 00:05:47.713 "mask": "0x2", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "scsi": { 00:05:47.713 "mask": "0x4", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "bdev": { 00:05:47.713 "mask": "0x8", 00:05:47.713 "tpoint_mask": "0xffffffffffffffff" 00:05:47.713 }, 00:05:47.713 "nvmf_rdma": { 00:05:47.713 "mask": "0x10", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "nvmf_tcp": { 00:05:47.713 "mask": "0x20", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "ftl": { 00:05:47.713 "mask": "0x40", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "blobfs": { 00:05:47.713 "mask": "0x80", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "dsa": { 00:05:47.713 "mask": "0x200", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "thread": { 00:05:47.713 "mask": "0x400", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "nvme_pcie": { 00:05:47.713 "mask": "0x800", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "iaa": { 00:05:47.713 "mask": "0x1000", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "nvme_tcp": { 00:05:47.713 "mask": "0x2000", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "bdev_nvme": { 00:05:47.713 "mask": "0x4000", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "sock": { 00:05:47.713 "mask": "0x8000", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "blob": { 00:05:47.713 "mask": "0x10000", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 }, 00:05:47.713 "bdev_raid": { 00:05:47.713 "mask": "0x20000", 00:05:47.713 "tpoint_mask": "0x0" 00:05:47.713 } 00:05:47.713 }' 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:47.713 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:47.971 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:47.971 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:47.971 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:47.971 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:47.971 18:23:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:47.971 00:05:47.971 real 0m0.280s 00:05:47.971 user 0m0.246s 00:05:47.971 sys 0m0.025s 00:05:47.971 ************************************ 
00:05:47.971 END TEST rpc_trace_cmd_test 00:05:47.971 ************************************ 00:05:47.971 18:23:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.971 18:23:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:47.971 18:23:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:47.971 18:23:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:47.971 18:23:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:47.971 18:23:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.971 18:23:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.971 18:23:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.971 ************************************ 00:05:47.971 START TEST rpc_daemon_integrity 00:05:47.971 ************************************ 00:05:47.971 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:47.971 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.971 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.971 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.971 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.971 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.971 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.231 { 00:05:48.231 "name": "Malloc2", 00:05:48.231 "aliases": [ 00:05:48.231 "746a73ed-db0a-4a97-b85e-b58b37ee02a3" 00:05:48.231 ], 00:05:48.231 "product_name": "Malloc disk", 00:05:48.231 "block_size": 512, 00:05:48.231 "num_blocks": 16384, 00:05:48.231 "uuid": "746a73ed-db0a-4a97-b85e-b58b37ee02a3", 00:05:48.231 "assigned_rate_limits": { 00:05:48.231 "rw_ios_per_sec": 0, 00:05:48.231 "rw_mbytes_per_sec": 0, 00:05:48.231 "r_mbytes_per_sec": 0, 00:05:48.231 "w_mbytes_per_sec": 0 00:05:48.231 }, 00:05:48.231 "claimed": false, 00:05:48.231 "zoned": false, 00:05:48.231 "supported_io_types": { 00:05:48.231 "read": true, 00:05:48.231 "write": true, 00:05:48.231 "unmap": true, 00:05:48.231 "flush": true, 00:05:48.231 "reset": true, 00:05:48.231 "nvme_admin": false, 00:05:48.231 "nvme_io": false, 00:05:48.231 "nvme_io_md": false, 00:05:48.231 "write_zeroes": true, 00:05:48.231 "zcopy": true, 00:05:48.231 "get_zone_info": false, 00:05:48.231 "zone_management": false, 00:05:48.231 "zone_append": false, 
00:05:48.231 "compare": false, 00:05:48.231 "compare_and_write": false, 00:05:48.231 "abort": true, 00:05:48.231 "seek_hole": false, 00:05:48.231 "seek_data": false, 00:05:48.231 "copy": true, 00:05:48.231 "nvme_iov_md": false 00:05:48.231 }, 00:05:48.231 "memory_domains": [ 00:05:48.231 { 00:05:48.231 "dma_device_id": "system", 00:05:48.231 "dma_device_type": 1 00:05:48.231 }, 00:05:48.231 { 00:05:48.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.231 "dma_device_type": 2 00:05:48.231 } 00:05:48.231 ], 00:05:48.231 "driver_specific": {} 00:05:48.231 } 00:05:48.231 ]' 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.231 18:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.231 [2024-12-08 18:23:05.997633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:48.231 [2024-12-08 18:23:05.997692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.231 [2024-12-08 18:23:05.997709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eec4b0 00:05:48.231 [2024-12-08 18:23:05.997717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.231 [2024-12-08 18:23:05.999200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.231 [2024-12-08 18:23:05.999237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.231 Passthru0 00:05:48.231 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.231 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.231 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.231 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.231 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.231 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.231 { 00:05:48.231 "name": "Malloc2", 00:05:48.231 "aliases": [ 00:05:48.231 "746a73ed-db0a-4a97-b85e-b58b37ee02a3" 00:05:48.231 ], 00:05:48.231 "product_name": "Malloc disk", 00:05:48.231 "block_size": 512, 00:05:48.231 "num_blocks": 16384, 00:05:48.231 "uuid": "746a73ed-db0a-4a97-b85e-b58b37ee02a3", 00:05:48.231 "assigned_rate_limits": { 00:05:48.231 "rw_ios_per_sec": 0, 00:05:48.231 "rw_mbytes_per_sec": 0, 00:05:48.231 "r_mbytes_per_sec": 0, 00:05:48.231 "w_mbytes_per_sec": 0 00:05:48.231 }, 00:05:48.231 "claimed": true, 00:05:48.231 "claim_type": "exclusive_write", 00:05:48.231 "zoned": false, 00:05:48.231 "supported_io_types": { 00:05:48.231 "read": true, 00:05:48.231 "write": true, 00:05:48.231 "unmap": true, 00:05:48.231 "flush": true, 00:05:48.231 "reset": true, 00:05:48.231 "nvme_admin": false, 00:05:48.231 "nvme_io": false, 00:05:48.231 "nvme_io_md": false, 00:05:48.231 "write_zeroes": true, 00:05:48.231 "zcopy": true, 00:05:48.231 "get_zone_info": false, 00:05:48.231 "zone_management": false, 00:05:48.231 "zone_append": false, 00:05:48.231 "compare": false, 00:05:48.231 "compare_and_write": false, 00:05:48.231 "abort": true, 00:05:48.231 "seek_hole": 
false, 00:05:48.231 "seek_data": false, 00:05:48.231 "copy": true, 00:05:48.231 "nvme_iov_md": false 00:05:48.231 }, 00:05:48.231 "memory_domains": [ 00:05:48.231 { 00:05:48.231 "dma_device_id": "system", 00:05:48.231 "dma_device_type": 1 00:05:48.231 }, 00:05:48.231 { 00:05:48.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.231 "dma_device_type": 2 00:05:48.231 } 00:05:48.231 ], 00:05:48.231 "driver_specific": {} 00:05:48.231 }, 00:05:48.231 { 00:05:48.231 "name": "Passthru0", 00:05:48.231 "aliases": [ 00:05:48.231 "905429eb-fcaa-5502-871a-a0a3a4e14c8a" 00:05:48.231 ], 00:05:48.231 "product_name": "passthru", 00:05:48.231 "block_size": 512, 00:05:48.231 "num_blocks": 16384, 00:05:48.231 "uuid": "905429eb-fcaa-5502-871a-a0a3a4e14c8a", 00:05:48.231 "assigned_rate_limits": { 00:05:48.231 "rw_ios_per_sec": 0, 00:05:48.231 "rw_mbytes_per_sec": 0, 00:05:48.231 "r_mbytes_per_sec": 0, 00:05:48.231 "w_mbytes_per_sec": 0 00:05:48.231 }, 00:05:48.231 "claimed": false, 00:05:48.231 "zoned": false, 00:05:48.231 "supported_io_types": { 00:05:48.231 "read": true, 00:05:48.231 "write": true, 00:05:48.231 "unmap": true, 00:05:48.231 "flush": true, 00:05:48.231 "reset": true, 00:05:48.231 "nvme_admin": false, 00:05:48.231 "nvme_io": false, 00:05:48.231 "nvme_io_md": false, 00:05:48.231 "write_zeroes": true, 00:05:48.231 "zcopy": true, 00:05:48.231 "get_zone_info": false, 00:05:48.231 "zone_management": false, 00:05:48.231 "zone_append": false, 00:05:48.231 "compare": false, 00:05:48.231 "compare_and_write": false, 00:05:48.231 "abort": true, 00:05:48.231 "seek_hole": false, 00:05:48.231 "seek_data": false, 00:05:48.231 "copy": true, 00:05:48.231 "nvme_iov_md": false 00:05:48.231 }, 00:05:48.231 "memory_domains": [ 00:05:48.231 { 00:05:48.231 "dma_device_id": "system", 00:05:48.231 "dma_device_type": 1 00:05:48.231 }, 00:05:48.231 { 00:05:48.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.231 "dma_device_type": 2 00:05:48.231 } 00:05:48.231 ], 00:05:48.231 "driver_specific": { 00:05:48.231 "passthru": { 00:05:48.231 "name": "Passthru0", 00:05:48.231 "base_bdev_name": "Malloc2" 00:05:48.231 } 00:05:48.231 } 00:05:48.231 } 00:05:48.232 ]' 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.232 18:23:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.232 00:05:48.232 real 0m0.318s 00:05:48.232 user 0m0.225s 00:05:48.232 sys 0m0.030s 00:05:48.491 ************************************ 00:05:48.491 END TEST rpc_daemon_integrity 00:05:48.491 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.491 18:23:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.491 ************************************ 00:05:48.491 18:23:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.491 18:23:06 rpc -- rpc/rpc.sh@84 -- # killprocess 68822 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@950 -- # '[' -z 68822 ']' 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@954 -- # kill -0 68822 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@955 -- # uname 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68822 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.491 killing process with pid 68822 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68822' 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@969 -- # kill 68822 00:05:48.491 18:23:06 rpc -- common/autotest_common.sh@974 -- # wait 68822 00:05:48.750 00:05:48.750 real 0m2.356s 00:05:48.750 user 0m3.040s 00:05:48.750 sys 0m0.643s 00:05:48.750 ************************************ 00:05:48.750 END TEST rpc 00:05:48.750 ************************************ 00:05:48.750 18:23:06 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.750 18:23:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.750 18:23:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.750 18:23:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.750 18:23:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.750 18:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:48.750 ************************************ 00:05:48.750 START TEST skip_rpc 00:05:48.750 ************************************ 00:05:48.750 18:23:06 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.010 * Looking for test storage... 
00:05:49.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.010 18:23:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.010 --rc genhtml_branch_coverage=1 00:05:49.010 --rc genhtml_function_coverage=1 00:05:49.010 --rc genhtml_legend=1 00:05:49.010 --rc geninfo_all_blocks=1 00:05:49.010 --rc geninfo_unexecuted_blocks=1 00:05:49.010 00:05:49.010 ' 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.010 --rc genhtml_branch_coverage=1 00:05:49.010 --rc genhtml_function_coverage=1 00:05:49.010 --rc genhtml_legend=1 00:05:49.010 --rc geninfo_all_blocks=1 00:05:49.010 --rc geninfo_unexecuted_blocks=1 00:05:49.010 00:05:49.010 ' 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:49.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.010 --rc genhtml_branch_coverage=1 00:05:49.010 --rc genhtml_function_coverage=1 00:05:49.010 --rc genhtml_legend=1 00:05:49.010 --rc geninfo_all_blocks=1 00:05:49.010 --rc geninfo_unexecuted_blocks=1 00:05:49.010 00:05:49.010 ' 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.010 --rc genhtml_branch_coverage=1 00:05:49.010 --rc genhtml_function_coverage=1 00:05:49.010 --rc genhtml_legend=1 00:05:49.010 --rc geninfo_all_blocks=1 00:05:49.010 --rc geninfo_unexecuted_blocks=1 00:05:49.010 00:05:49.010 ' 00:05:49.010 18:23:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.010 18:23:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.010 18:23:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.010 18:23:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.010 ************************************ 00:05:49.010 START TEST skip_rpc 00:05:49.010 ************************************ 00:05:49.010 18:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:49.010 18:23:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69020 00:05:49.010 18:23:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.010 18:23:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:49.010 18:23:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.010 [2024-12-08 18:23:06.889659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:49.010 [2024-12-08 18:23:06.889761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69020 ] 00:05:49.269 [2024-12-08 18:23:07.027337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.269 [2024-12-08 18:23:07.088220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.269 [2024-12-08 18:23:07.150604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69020 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69020 ']' 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69020 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69020 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.536 killing process with pid 69020 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69020' 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69020 00:05:54.536 18:23:11 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69020 00:05:54.536 00:05:54.536 real 0m5.409s 00:05:54.536 user 0m5.039s 00:05:54.536 sys 0m0.285s 00:05:54.537 18:23:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.537 18:23:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:54.537 ************************************ 00:05:54.537 END TEST skip_rpc 00:05:54.537 ************************************ 00:05:54.537 18:23:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:54.537 18:23:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.537 18:23:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.537 18:23:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 ************************************ 00:05:54.537 START TEST skip_rpc_with_json 00:05:54.537 ************************************ 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69101 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69101 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69101 ']' 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.537 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 [2024-12-08 18:23:12.332455] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:54.537 [2024-12-08 18:23:12.332544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69101 ] 00:05:54.537 [2024-12-08 18:23:12.463471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.829 [2024-12-08 18:23:12.530643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.829 [2024-12-08 18:23:12.594799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.109 [2024-12-08 18:23:12.783134] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.109 request: 00:05:55.109 { 00:05:55.109 "trtype": "tcp", 00:05:55.109 "method": "nvmf_get_transports", 00:05:55.109 "req_id": 1 00:05:55.109 } 00:05:55.109 Got JSON-RPC error response 00:05:55.109 response: 00:05:55.109 { 00:05:55.109 "code": -19, 00:05:55.109 "message": "No such device" 00:05:55.109 } 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.109 [2024-12-08 18:23:12.795246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.109 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.110 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.110 { 00:05:55.110 "subsystems": [ 00:05:55.110 { 00:05:55.110 "subsystem": "fsdev", 00:05:55.110 "config": [ 00:05:55.110 { 00:05:55.110 "method": "fsdev_set_opts", 00:05:55.110 "params": { 00:05:55.110 "fsdev_io_pool_size": 65535, 00:05:55.110 "fsdev_io_cache_size": 256 00:05:55.110 } 00:05:55.110 } 00:05:55.110 ] 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "subsystem": "keyring", 00:05:55.110 "config": [] 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "subsystem": "iobuf", 00:05:55.110 "config": [ 00:05:55.110 { 00:05:55.110 "method": "iobuf_set_options", 00:05:55.110 "params": { 00:05:55.110 "small_pool_count": 8192, 00:05:55.110 "large_pool_count": 1024, 00:05:55.110 "small_bufsize": 8192, 00:05:55.110 "large_bufsize": 135168 00:05:55.110 } 00:05:55.110 } 00:05:55.110 ] 00:05:55.110 
}, 00:05:55.110 { 00:05:55.110 "subsystem": "sock", 00:05:55.110 "config": [ 00:05:55.110 { 00:05:55.110 "method": "sock_set_default_impl", 00:05:55.110 "params": { 00:05:55.110 "impl_name": "uring" 00:05:55.110 } 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "method": "sock_impl_set_options", 00:05:55.110 "params": { 00:05:55.110 "impl_name": "ssl", 00:05:55.110 "recv_buf_size": 4096, 00:05:55.110 "send_buf_size": 4096, 00:05:55.110 "enable_recv_pipe": true, 00:05:55.110 "enable_quickack": false, 00:05:55.110 "enable_placement_id": 0, 00:05:55.110 "enable_zerocopy_send_server": true, 00:05:55.110 "enable_zerocopy_send_client": false, 00:05:55.110 "zerocopy_threshold": 0, 00:05:55.110 "tls_version": 0, 00:05:55.110 "enable_ktls": false 00:05:55.110 } 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "method": "sock_impl_set_options", 00:05:55.110 "params": { 00:05:55.110 "impl_name": "posix", 00:05:55.110 "recv_buf_size": 2097152, 00:05:55.110 "send_buf_size": 2097152, 00:05:55.110 "enable_recv_pipe": true, 00:05:55.110 "enable_quickack": false, 00:05:55.110 "enable_placement_id": 0, 00:05:55.110 "enable_zerocopy_send_server": true, 00:05:55.110 "enable_zerocopy_send_client": false, 00:05:55.110 "zerocopy_threshold": 0, 00:05:55.110 "tls_version": 0, 00:05:55.110 "enable_ktls": false 00:05:55.110 } 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "method": "sock_impl_set_options", 00:05:55.110 "params": { 00:05:55.110 "impl_name": "uring", 00:05:55.110 "recv_buf_size": 2097152, 00:05:55.110 "send_buf_size": 2097152, 00:05:55.110 "enable_recv_pipe": true, 00:05:55.110 "enable_quickack": false, 00:05:55.110 "enable_placement_id": 0, 00:05:55.110 "enable_zerocopy_send_server": false, 00:05:55.110 "enable_zerocopy_send_client": false, 00:05:55.110 "zerocopy_threshold": 0, 00:05:55.110 "tls_version": 0, 00:05:55.110 "enable_ktls": false 00:05:55.110 } 00:05:55.110 } 00:05:55.110 ] 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "subsystem": "vmd", 00:05:55.110 "config": [] 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "subsystem": "accel", 00:05:55.110 "config": [ 00:05:55.110 { 00:05:55.110 "method": "accel_set_options", 00:05:55.110 "params": { 00:05:55.110 "small_cache_size": 128, 00:05:55.110 "large_cache_size": 16, 00:05:55.110 "task_count": 2048, 00:05:55.110 "sequence_count": 2048, 00:05:55.110 "buf_count": 2048 00:05:55.110 } 00:05:55.110 } 00:05:55.110 ] 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "subsystem": "bdev", 00:05:55.110 "config": [ 00:05:55.110 { 00:05:55.110 "method": "bdev_set_options", 00:05:55.110 "params": { 00:05:55.110 "bdev_io_pool_size": 65535, 00:05:55.110 "bdev_io_cache_size": 256, 00:05:55.110 "bdev_auto_examine": true, 00:05:55.110 "iobuf_small_cache_size": 128, 00:05:55.110 "iobuf_large_cache_size": 16 00:05:55.110 } 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "method": "bdev_raid_set_options", 00:05:55.110 "params": { 00:05:55.110 "process_window_size_kb": 1024, 00:05:55.110 "process_max_bandwidth_mb_sec": 0 00:05:55.110 } 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "method": "bdev_iscsi_set_options", 00:05:55.110 "params": { 00:05:55.110 "timeout_sec": 30 00:05:55.110 } 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "method": "bdev_nvme_set_options", 00:05:55.110 "params": { 00:05:55.110 "action_on_timeout": "none", 00:05:55.110 "timeout_us": 0, 00:05:55.110 "timeout_admin_us": 0, 00:05:55.110 "keep_alive_timeout_ms": 10000, 00:05:55.110 "arbitration_burst": 0, 00:05:55.110 "low_priority_weight": 0, 00:05:55.110 "medium_priority_weight": 0, 00:05:55.110 "high_priority_weight": 0, 
00:05:55.110 "nvme_adminq_poll_period_us": 10000, 00:05:55.110 "nvme_ioq_poll_period_us": 0, 00:05:55.110 "io_queue_requests": 0, 00:05:55.110 "delay_cmd_submit": true, 00:05:55.110 "transport_retry_count": 4, 00:05:55.110 "bdev_retry_count": 3, 00:05:55.110 "transport_ack_timeout": 0, 00:05:55.110 "ctrlr_loss_timeout_sec": 0, 00:05:55.110 "reconnect_delay_sec": 0, 00:05:55.110 "fast_io_fail_timeout_sec": 0, 00:05:55.110 "disable_auto_failback": false, 00:05:55.110 "generate_uuids": false, 00:05:55.110 "transport_tos": 0, 00:05:55.110 "nvme_error_stat": false, 00:05:55.110 "rdma_srq_size": 0, 00:05:55.110 "io_path_stat": false, 00:05:55.110 "allow_accel_sequence": false, 00:05:55.110 "rdma_max_cq_size": 0, 00:05:55.110 "rdma_cm_event_timeout_ms": 0, 00:05:55.110 "dhchap_digests": [ 00:05:55.110 "sha256", 00:05:55.110 "sha384", 00:05:55.110 "sha512" 00:05:55.110 ], 00:05:55.110 "dhchap_dhgroups": [ 00:05:55.110 "null", 00:05:55.110 "ffdhe2048", 00:05:55.110 "ffdhe3072", 00:05:55.110 "ffdhe4096", 00:05:55.110 "ffdhe6144", 00:05:55.110 "ffdhe8192" 00:05:55.110 ] 00:05:55.110 } 00:05:55.110 }, 00:05:55.110 { 00:05:55.110 "method": "bdev_nvme_set_hotplug", 00:05:55.110 "params": { 00:05:55.111 "period_us": 100000, 00:05:55.111 "enable": false 00:05:55.111 } 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "method": "bdev_wait_for_examine" 00:05:55.111 } 00:05:55.111 ] 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "scsi", 00:05:55.111 "config": null 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "scheduler", 00:05:55.111 "config": [ 00:05:55.111 { 00:05:55.111 "method": "framework_set_scheduler", 00:05:55.111 "params": { 00:05:55.111 "name": "static" 00:05:55.111 } 00:05:55.111 } 00:05:55.111 ] 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "vhost_scsi", 00:05:55.111 "config": [] 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "vhost_blk", 00:05:55.111 "config": [] 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "ublk", 00:05:55.111 "config": [] 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "nbd", 00:05:55.111 "config": [] 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "nvmf", 00:05:55.111 "config": [ 00:05:55.111 { 00:05:55.111 "method": "nvmf_set_config", 00:05:55.111 "params": { 00:05:55.111 "discovery_filter": "match_any", 00:05:55.111 "admin_cmd_passthru": { 00:05:55.111 "identify_ctrlr": false 00:05:55.111 }, 00:05:55.111 "dhchap_digests": [ 00:05:55.111 "sha256", 00:05:55.111 "sha384", 00:05:55.111 "sha512" 00:05:55.111 ], 00:05:55.111 "dhchap_dhgroups": [ 00:05:55.111 "null", 00:05:55.111 "ffdhe2048", 00:05:55.111 "ffdhe3072", 00:05:55.111 "ffdhe4096", 00:05:55.111 "ffdhe6144", 00:05:55.111 "ffdhe8192" 00:05:55.111 ] 00:05:55.111 } 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "method": "nvmf_set_max_subsystems", 00:05:55.111 "params": { 00:05:55.111 "max_subsystems": 1024 00:05:55.111 } 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "method": "nvmf_set_crdt", 00:05:55.111 "params": { 00:05:55.111 "crdt1": 0, 00:05:55.111 "crdt2": 0, 00:05:55.111 "crdt3": 0 00:05:55.111 } 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "method": "nvmf_create_transport", 00:05:55.111 "params": { 00:05:55.111 "trtype": "TCP", 00:05:55.111 "max_queue_depth": 128, 00:05:55.111 "max_io_qpairs_per_ctrlr": 127, 00:05:55.111 "in_capsule_data_size": 4096, 00:05:55.111 "max_io_size": 131072, 00:05:55.111 "io_unit_size": 131072, 00:05:55.111 "max_aq_depth": 128, 00:05:55.111 "num_shared_buffers": 511, 00:05:55.111 "buf_cache_size": 4294967295, 00:05:55.111 
"dif_insert_or_strip": false, 00:05:55.111 "zcopy": false, 00:05:55.111 "c2h_success": true, 00:05:55.111 "sock_priority": 0, 00:05:55.111 "abort_timeout_sec": 1, 00:05:55.111 "ack_timeout": 0, 00:05:55.111 "data_wr_pool_size": 0 00:05:55.111 } 00:05:55.111 } 00:05:55.111 ] 00:05:55.111 }, 00:05:55.111 { 00:05:55.111 "subsystem": "iscsi", 00:05:55.111 "config": [ 00:05:55.111 { 00:05:55.111 "method": "iscsi_set_options", 00:05:55.111 "params": { 00:05:55.111 "node_base": "iqn.2016-06.io.spdk", 00:05:55.111 "max_sessions": 128, 00:05:55.111 "max_connections_per_session": 2, 00:05:55.111 "max_queue_depth": 64, 00:05:55.111 "default_time2wait": 2, 00:05:55.111 "default_time2retain": 20, 00:05:55.111 "first_burst_length": 8192, 00:05:55.111 "immediate_data": true, 00:05:55.111 "allow_duplicated_isid": false, 00:05:55.111 "error_recovery_level": 0, 00:05:55.111 "nop_timeout": 60, 00:05:55.111 "nop_in_interval": 30, 00:05:55.111 "disable_chap": false, 00:05:55.111 "require_chap": false, 00:05:55.111 "mutual_chap": false, 00:05:55.111 "chap_group": 0, 00:05:55.111 "max_large_datain_per_connection": 64, 00:05:55.111 "max_r2t_per_connection": 4, 00:05:55.111 "pdu_pool_size": 36864, 00:05:55.111 "immediate_data_pool_size": 16384, 00:05:55.111 "data_out_pool_size": 2048 00:05:55.111 } 00:05:55.111 } 00:05:55.111 ] 00:05:55.111 } 00:05:55.111 ] 00:05:55.111 } 00:05:55.111 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:55.111 18:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69101 00:05:55.111 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69101 ']' 00:05:55.111 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69101 00:05:55.111 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:55.111 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.111 18:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69101 00:05:55.111 18:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.111 killing process with pid 69101 00:05:55.111 18:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.111 18:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69101' 00:05:55.111 18:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69101 00:05:55.111 18:23:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69101 00:05:55.684 18:23:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69127 00:05:55.684 18:23:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.684 18:23:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69127 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69127 ']' 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69127 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69127 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.976 killing process with pid 69127 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69127' 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69127 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69127 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.976 00:06:00.976 real 0m6.545s 00:06:00.976 user 0m6.063s 00:06:00.976 sys 0m0.639s 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.976 ************************************ 00:06:00.976 END TEST skip_rpc_with_json 00:06:00.976 ************************************ 00:06:00.976 18:23:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.976 18:23:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.976 18:23:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.976 18:23:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.976 ************************************ 00:06:00.976 START TEST skip_rpc_with_delay 00:06:00.976 ************************************ 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.976 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.235 [2024-12-08 18:23:18.936936] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:01.235 [2024-12-08 18:23:18.937053] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:01.235 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:01.235 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.235 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.235 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.235 00:06:01.235 real 0m0.077s 00:06:01.235 user 0m0.043s 00:06:01.235 sys 0m0.033s 00:06:01.235 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.235 18:23:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.235 ************************************ 00:06:01.235 END TEST skip_rpc_with_delay 00:06:01.235 ************************************ 00:06:01.235 18:23:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:01.235 18:23:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:01.235 18:23:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:01.235 18:23:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.235 18:23:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.235 18:23:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.235 ************************************ 00:06:01.235 START TEST exit_on_failed_rpc_init 00:06:01.235 ************************************ 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69236 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69236 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69236 ']' 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.235 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.235 [2024-12-08 18:23:19.072040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:01.235 [2024-12-08 18:23:19.072146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69236 ] 00:06:01.493 [2024-12-08 18:23:19.205461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.494 [2024-12-08 18:23:19.288157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.494 [2024-12-08 18:23:19.353223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.752 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.752 [2024-12-08 18:23:19.607862] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:01.752 [2024-12-08 18:23:19.607991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69247 ] 00:06:02.012 [2024-12-08 18:23:19.748342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.012 [2024-12-08 18:23:19.825371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.012 [2024-12-08 18:23:19.825495] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:02.012 [2024-12-08 18:23:19.825513] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.012 [2024-12-08 18:23:19.825525] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69236 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69236 ']' 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69236 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.012 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69236 00:06:02.271 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.271 killing process with pid 69236 00:06:02.271 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.271 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69236' 00:06:02.271 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69236 00:06:02.271 18:23:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69236 00:06:02.530 00:06:02.530 real 0m1.325s 00:06:02.530 user 0m1.437s 00:06:02.530 sys 0m0.403s 00:06:02.530 ************************************ 00:06:02.530 END TEST exit_on_failed_rpc_init 00:06:02.530 ************************************ 00:06:02.530 18:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.530 18:23:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.530 18:23:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.530 00:06:02.530 real 0m13.732s 00:06:02.530 user 0m12.756s 00:06:02.530 sys 0m1.560s 00:06:02.530 18:23:20 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.530 18:23:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.530 ************************************ 00:06:02.530 END TEST skip_rpc 00:06:02.530 ************************************ 00:06:02.530 18:23:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.530 18:23:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.530 18:23:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.530 18:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.530 
************************************ 00:06:02.530 START TEST rpc_client 00:06:02.530 ************************************ 00:06:02.530 18:23:20 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.790 * Looking for test storage... 00:06:02.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.790 18:23:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.790 --rc genhtml_branch_coverage=1 00:06:02.790 --rc genhtml_function_coverage=1 00:06:02.790 --rc genhtml_legend=1 00:06:02.790 --rc geninfo_all_blocks=1 00:06:02.790 --rc geninfo_unexecuted_blocks=1 00:06:02.790 00:06:02.790 ' 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.790 --rc genhtml_branch_coverage=1 00:06:02.790 --rc genhtml_function_coverage=1 00:06:02.790 --rc genhtml_legend=1 00:06:02.790 --rc geninfo_all_blocks=1 00:06:02.790 --rc geninfo_unexecuted_blocks=1 00:06:02.790 00:06:02.790 ' 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.790 --rc genhtml_branch_coverage=1 00:06:02.790 --rc genhtml_function_coverage=1 00:06:02.790 --rc genhtml_legend=1 00:06:02.790 --rc geninfo_all_blocks=1 00:06:02.790 --rc geninfo_unexecuted_blocks=1 00:06:02.790 00:06:02.790 ' 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.790 --rc genhtml_branch_coverage=1 00:06:02.790 --rc genhtml_function_coverage=1 00:06:02.790 --rc genhtml_legend=1 00:06:02.790 --rc geninfo_all_blocks=1 00:06:02.790 --rc geninfo_unexecuted_blocks=1 00:06:02.790 00:06:02.790 ' 00:06:02.790 18:23:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:02.790 OK 00:06:02.790 18:23:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.790 00:06:02.790 real 0m0.193s 00:06:02.790 user 0m0.108s 00:06:02.790 sys 0m0.098s 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.790 ************************************ 00:06:02.790 END TEST rpc_client 00:06:02.790 ************************************ 00:06:02.790 18:23:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.790 18:23:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.790 18:23:20 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.790 18:23:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.791 18:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.791 ************************************ 00:06:02.791 START TEST json_config 00:06:02.791 ************************************ 00:06:02.791 18:23:20 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.050 18:23:20 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.050 18:23:20 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.050 18:23:20 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.050 18:23:20 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.050 18:23:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.051 18:23:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.051 18:23:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.051 18:23:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.051 18:23:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.051 18:23:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.051 18:23:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.051 18:23:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.051 18:23:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.051 18:23:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.051 18:23:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.051 18:23:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:03.051 18:23:20 json_config -- scripts/common.sh@345 -- # : 1 00:06:03.051 18:23:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.051 18:23:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.051 18:23:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:03.051 18:23:20 json_config -- scripts/common.sh@353 -- # local d=1 00:06:03.051 18:23:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.051 18:23:20 json_config -- scripts/common.sh@355 -- # echo 1 00:06:03.051 18:23:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.051 18:23:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:03.051 18:23:20 json_config -- scripts/common.sh@353 -- # local d=2 00:06:03.051 18:23:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.051 18:23:20 json_config -- scripts/common.sh@355 -- # echo 2 00:06:03.051 18:23:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.051 18:23:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.051 18:23:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.051 18:23:20 json_config -- scripts/common.sh@368 -- # return 0 00:06:03.051 18:23:20 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.051 18:23:20 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.051 --rc genhtml_branch_coverage=1 00:06:03.051 --rc genhtml_function_coverage=1 00:06:03.051 --rc genhtml_legend=1 00:06:03.051 --rc geninfo_all_blocks=1 00:06:03.051 --rc geninfo_unexecuted_blocks=1 00:06:03.051 00:06:03.051 ' 00:06:03.051 18:23:20 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.051 --rc genhtml_branch_coverage=1 00:06:03.051 --rc genhtml_function_coverage=1 00:06:03.051 --rc genhtml_legend=1 00:06:03.051 --rc geninfo_all_blocks=1 00:06:03.051 --rc geninfo_unexecuted_blocks=1 00:06:03.051 00:06:03.051 ' 00:06:03.051 18:23:20 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.051 --rc genhtml_branch_coverage=1 00:06:03.051 --rc genhtml_function_coverage=1 00:06:03.051 --rc genhtml_legend=1 00:06:03.051 --rc geninfo_all_blocks=1 00:06:03.051 --rc geninfo_unexecuted_blocks=1 00:06:03.051 00:06:03.051 ' 00:06:03.051 18:23:20 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.051 --rc genhtml_branch_coverage=1 00:06:03.051 --rc genhtml_function_coverage=1 00:06:03.051 --rc genhtml_legend=1 00:06:03.051 --rc geninfo_all_blocks=1 00:06:03.051 --rc geninfo_unexecuted_blocks=1 00:06:03.051 00:06:03.051 ' 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.051 18:23:20 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.051 18:23:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.051 18:23:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.051 18:23:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.051 18:23:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.051 18:23:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.051 18:23:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.051 18:23:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.051 18:23:20 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.051 18:23:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@51 -- # : 0 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.051 18:23:20 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.051 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.051 18:23:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:03.051 18:23:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:03.052 18:23:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:03.052 18:23:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.052 INFO: JSON configuration test init 00:06:03.052 18:23:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:03.052 18:23:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:03.052 18:23:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.052 18:23:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.052 18:23:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:03.052 18:23:20 json_config -- json_config/common.sh@9 -- # local app=target 00:06:03.052 18:23:20 json_config -- json_config/common.sh@10 -- # shift 
00:06:03.052 18:23:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.052 18:23:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.052 18:23:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.052 18:23:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.052 18:23:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.052 18:23:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69381 00:06:03.052 Waiting for target to run... 00:06:03.052 18:23:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:03.052 18:23:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.052 18:23:20 json_config -- json_config/common.sh@25 -- # waitforlisten 69381 /var/tmp/spdk_tgt.sock 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@831 -- # '[' -z 69381 ']' 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.052 18:23:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.052 [2024-12-08 18:23:20.921358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:03.052 [2024-12-08 18:23:20.921517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69381 ] 00:06:03.620 [2024-12-08 18:23:21.364204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.620 [2024-12-08 18:23:21.433179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.187 18:23:21 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.187 18:23:21 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:04.187 00:06:04.187 18:23:21 json_config -- json_config/common.sh@26 -- # echo '' 00:06:04.187 18:23:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:04.187 18:23:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:04.187 18:23:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.187 18:23:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.187 18:23:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:04.187 18:23:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:04.187 18:23:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.187 18:23:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.187 18:23:21 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:04.187 18:23:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:04.187 18:23:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:04.445 [2024-12-08 18:23:22.261882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:04.703 18:23:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.703 18:23:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:04.703 18:23:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:04.703 18:23:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@54 -- # sort 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:04.962 18:23:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.962 18:23:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:04.962 18:23:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.962 18:23:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:04.962 18:23:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:04.962 18:23:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.221 MallocForNvmf0 00:06:05.221 18:23:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.221 18:23:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.489 MallocForNvmf1 00:06:05.489 18:23:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.489 18:23:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.747 [2024-12-08 18:23:23.537561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.747 18:23:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.747 18:23:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:06.006 18:23:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.006 18:23:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.263 18:23:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.263 18:23:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.521 18:23:24 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.521 18:23:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.521 [2024-12-08 18:23:24.414092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.521 18:23:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:06.521 18:23:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.521 18:23:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.780 18:23:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:06.780 18:23:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.780 18:23:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.780 18:23:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:06.780 18:23:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.780 18:23:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.038 MallocBdevForConfigChangeCheck 00:06:07.038 18:23:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:07.038 18:23:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.038 18:23:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.038 18:23:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:07.038 18:23:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.295 INFO: shutting down applications... 00:06:07.295 18:23:25 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
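(For reference: the save_config step traced above can be reproduced by hand against the same RPC socket; the scripts and socket path are the ones this run uses, while the output file name below is only illustrative.)
# Sketch: snapshot the running target's JSON configuration over its RPC socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/spdk_running_config.json
# The resulting file has the same subsystem/config layout as the dump printed earlier in this log.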
00:06:07.295 18:23:25 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:07.295 18:23:25 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:07.295 18:23:25 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:07.295 18:23:25 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.862 Calling clear_iscsi_subsystem 00:06:07.862 Calling clear_nvmf_subsystem 00:06:07.862 Calling clear_nbd_subsystem 00:06:07.862 Calling clear_ublk_subsystem 00:06:07.862 Calling clear_vhost_blk_subsystem 00:06:07.862 Calling clear_vhost_scsi_subsystem 00:06:07.862 Calling clear_bdev_subsystem 00:06:07.862 18:23:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:07.862 18:23:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:07.862 18:23:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:07.862 18:23:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.862 18:23:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.862 18:23:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:08.121 18:23:25 json_config -- json_config/json_config.sh@352 -- # break 00:06:08.121 18:23:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:08.121 18:23:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:08.121 18:23:25 json_config -- json_config/common.sh@31 -- # local app=target 00:06:08.121 18:23:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.121 18:23:25 json_config -- json_config/common.sh@35 -- # [[ -n 69381 ]] 00:06:08.121 18:23:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69381 00:06:08.121 18:23:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.121 18:23:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.121 18:23:25 json_config -- json_config/common.sh@41 -- # kill -0 69381 00:06:08.121 18:23:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.712 18:23:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.712 18:23:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.712 18:23:26 json_config -- json_config/common.sh@41 -- # kill -0 69381 00:06:08.712 18:23:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.712 18:23:26 json_config -- json_config/common.sh@43 -- # break 00:06:08.712 18:23:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.712 SPDK target shutdown done 00:06:08.712 18:23:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.712 INFO: relaunching applications... 00:06:08.712 18:23:26 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:06:08.712 18:23:26 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.712 18:23:26 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.712 18:23:26 json_config -- json_config/common.sh@10 -- # shift 00:06:08.712 18:23:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.712 18:23:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.712 18:23:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.712 18:23:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.712 18:23:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.712 18:23:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69576 00:06:08.712 Waiting for target to run... 00:06:08.712 18:23:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.712 18:23:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.713 18:23:26 json_config -- json_config/common.sh@25 -- # waitforlisten 69576 /var/tmp/spdk_tgt.sock 00:06:08.713 18:23:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 69576 ']' 00:06:08.713 18:23:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.713 18:23:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.713 18:23:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.713 18:23:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.713 18:23:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.713 [2024-12-08 18:23:26.544778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:08.713 [2024-12-08 18:23:26.544896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69576 ] 00:06:09.281 [2024-12-08 18:23:26.959157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.281 [2024-12-08 18:23:27.014112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.281 [2024-12-08 18:23:27.148251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.541 [2024-12-08 18:23:27.356060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.541 [2024-12-08 18:23:27.388120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.541 00:06:09.541 INFO: Checking if target configuration is the same... 
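(For reference: the json_diff.sh run traced below reduces to dumping the live config, normalizing ordering with config_filter.py, and diffing against the file the target was relaunched from. A minimal hand-run equivalent using the same scripts is sketched here; config_filter.py is assumed to filter stdin to stdout as the piped calls in this log suggest, and the temp file names are illustrative.)
# Sketch: compare the relaunched target's live config with spdk_tgt_config.json.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
    < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'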
00:06:09.541 18:23:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.541 18:23:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:09.541 18:23:27 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.541 18:23:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:09.541 18:23:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:09.541 18:23:27 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:09.541 18:23:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:09.541 18:23:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.541 + '[' 2 -ne 2 ']' 00:06:09.541 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:09.801 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:09.801 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:09.801 +++ basename /dev/fd/62 00:06:09.801 ++ mktemp /tmp/62.XXX 00:06:09.801 + tmp_file_1=/tmp/62.ynI 00:06:09.801 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:09.801 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.801 + tmp_file_2=/tmp/spdk_tgt_config.json.lOV 00:06:09.801 + ret=0 00:06:09.801 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.062 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.062 + diff -u /tmp/62.ynI /tmp/spdk_tgt_config.json.lOV 00:06:10.062 INFO: JSON config files are the same 00:06:10.062 + echo 'INFO: JSON config files are the same' 00:06:10.062 + rm /tmp/62.ynI /tmp/spdk_tgt_config.json.lOV 00:06:10.062 + exit 0 00:06:10.062 INFO: changing configuration and checking if this can be detected... 00:06:10.062 18:23:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:10.062 18:23:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:10.062 18:23:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.062 18:23:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.322 18:23:28 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.322 18:23:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:10.322 18:23:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.322 + '[' 2 -ne 2 ']' 00:06:10.322 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:10.322 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:10.322 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:10.322 +++ basename /dev/fd/62 00:06:10.322 ++ mktemp /tmp/62.XXX 00:06:10.322 + tmp_file_1=/tmp/62.LBl 00:06:10.322 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:10.322 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.322 + tmp_file_2=/tmp/spdk_tgt_config.json.1kK 00:06:10.322 + ret=0 00:06:10.322 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.891 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:10.891 + diff -u /tmp/62.LBl /tmp/spdk_tgt_config.json.1kK 00:06:10.891 + ret=1 00:06:10.891 + echo '=== Start of file: /tmp/62.LBl ===' 00:06:10.891 + cat /tmp/62.LBl 00:06:10.891 + echo '=== End of file: /tmp/62.LBl ===' 00:06:10.892 + echo '' 00:06:10.892 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1kK ===' 00:06:10.892 + cat /tmp/spdk_tgt_config.json.1kK 00:06:10.892 + echo '=== End of file: /tmp/spdk_tgt_config.json.1kK ===' 00:06:10.892 + echo '' 00:06:10.892 + rm /tmp/62.LBl /tmp/spdk_tgt_config.json.1kK 00:06:10.892 + exit 1 00:06:10.892 INFO: configuration change detected. 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 69576 ]] 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.892 18:23:28 json_config -- json_config/json_config.sh@330 -- # killprocess 69576 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@950 -- # '[' -z 69576 ']' 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@954 -- # kill -0 69576 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@955 -- # uname 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69576 00:06:10.892 
killing process with pid 69576 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69576' 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@969 -- # kill 69576 00:06:10.892 18:23:28 json_config -- common/autotest_common.sh@974 -- # wait 69576 00:06:11.151 18:23:28 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.151 18:23:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:11.151 18:23:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.151 18:23:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.151 INFO: Success 00:06:11.151 18:23:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:11.151 18:23:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:11.151 00:06:11.151 real 0m8.344s 00:06:11.151 user 0m11.829s 00:06:11.151 sys 0m1.736s 00:06:11.151 ************************************ 00:06:11.151 END TEST json_config 00:06:11.151 ************************************ 00:06:11.151 18:23:29 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.151 18:23:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.151 18:23:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.151 18:23:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.151 18:23:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.151 18:23:29 -- common/autotest_common.sh@10 -- # set +x 00:06:11.151 ************************************ 00:06:11.151 START TEST json_config_extra_key 00:06:11.151 ************************************ 00:06:11.151 18:23:29 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.411 18:23:29 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.411 18:23:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.411 --rc genhtml_branch_coverage=1 00:06:11.411 --rc genhtml_function_coverage=1 00:06:11.411 --rc genhtml_legend=1 00:06:11.411 --rc geninfo_all_blocks=1 00:06:11.411 --rc geninfo_unexecuted_blocks=1 00:06:11.411 00:06:11.411 ' 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.411 --rc genhtml_branch_coverage=1 00:06:11.411 --rc genhtml_function_coverage=1 00:06:11.411 --rc genhtml_legend=1 00:06:11.411 --rc geninfo_all_blocks=1 00:06:11.411 --rc geninfo_unexecuted_blocks=1 00:06:11.411 00:06:11.411 ' 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.411 --rc genhtml_branch_coverage=1 00:06:11.411 --rc genhtml_function_coverage=1 00:06:11.411 --rc genhtml_legend=1 00:06:11.411 --rc geninfo_all_blocks=1 00:06:11.411 --rc geninfo_unexecuted_blocks=1 00:06:11.411 00:06:11.411 ' 00:06:11.411 18:23:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.411 --rc genhtml_branch_coverage=1 00:06:11.411 --rc genhtml_function_coverage=1 00:06:11.411 --rc genhtml_legend=1 00:06:11.411 --rc geninfo_all_blocks=1 00:06:11.412 --rc geninfo_unexecuted_blocks=1 00:06:11.412 00:06:11.412 ' 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.412 18:23:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.412 18:23:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.412 18:23:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.412 18:23:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.412 18:23:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.412 18:23:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.412 18:23:29 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.412 18:23:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:11.412 18:23:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.412 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.412 18:23:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.412 INFO: launching applications... 00:06:11.412 Waiting for target to run... 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:11.412 18:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69725 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69725 /var/tmp/spdk_tgt.sock 00:06:11.412 18:23:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.412 18:23:29 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69725 ']' 00:06:11.412 18:23:29 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.412 18:23:29 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.412 18:23:29 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.412 18:23:29 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.412 18:23:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.412 [2024-12-08 18:23:29.323458] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:11.412 [2024-12-08 18:23:29.323694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69725 ] 00:06:11.980 [2024-12-08 18:23:29.772096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.980 [2024-12-08 18:23:29.828740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.980 [2024-12-08 18:23:29.857244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.548 18:23:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.548 18:23:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.548 00:06:12.548 INFO: shutting down applications... 00:06:12.548 18:23:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
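The shutdown that follows uses the same helper traced earlier in the json_config run: send SIGINT to the target, then poll the pid for up to 30 half-second intervals until it is gone. Roughly, as a sketch of that loop:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # target exited; shutdown complete
        sleep 0.5
    done
    echo 'SPDK target shutdown done'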
00:06:12.548 18:23:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69725 ]] 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69725 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69725 00:06:12.548 18:23:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.127 18:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.127 18:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.127 18:23:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69725 00:06:13.127 18:23:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.127 18:23:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.127 18:23:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.127 18:23:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.127 SPDK target shutdown done 00:06:13.128 18:23:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.128 Success 00:06:13.128 00:06:13.128 real 0m1.755s 00:06:13.128 user 0m1.599s 00:06:13.128 sys 0m0.469s 00:06:13.128 ************************************ 00:06:13.128 END TEST json_config_extra_key 00:06:13.128 ************************************ 00:06:13.128 18:23:30 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.128 18:23:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.128 18:23:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.128 18:23:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.128 18:23:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.128 18:23:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.128 ************************************ 00:06:13.128 START TEST alias_rpc 00:06:13.128 ************************************ 00:06:13.128 18:23:30 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.128 * Looking for test storage... 
00:06:13.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:13.128 18:23:30 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.128 18:23:30 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.128 18:23:30 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.128 18:23:31 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
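The "Waiting for process to start up and listen on UNIX domain socket" message above is printed from autotest_common.sh (waitforlisten) while the freshly started spdk_tgt comes up. Its exact implementation is not shown in this trace; a hypothetical equivalent would poll the RPC socket until the target answers, e.g.:

    # illustrative only -- not the actual waitforlisten implementation
    for (( i = 0; i < 100; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done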
00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.128 18:23:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:13.128 18:23:31 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.128 18:23:31 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.128 --rc genhtml_branch_coverage=1 00:06:13.128 --rc genhtml_function_coverage=1 00:06:13.128 --rc genhtml_legend=1 00:06:13.128 --rc geninfo_all_blocks=1 00:06:13.128 --rc geninfo_unexecuted_blocks=1 00:06:13.128 00:06:13.128 ' 00:06:13.128 18:23:31 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.128 --rc genhtml_branch_coverage=1 00:06:13.128 --rc genhtml_function_coverage=1 00:06:13.128 --rc genhtml_legend=1 00:06:13.128 --rc geninfo_all_blocks=1 00:06:13.128 --rc geninfo_unexecuted_blocks=1 00:06:13.128 00:06:13.128 ' 00:06:13.128 18:23:31 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.129 --rc genhtml_branch_coverage=1 00:06:13.129 --rc genhtml_function_coverage=1 00:06:13.129 --rc genhtml_legend=1 00:06:13.129 --rc geninfo_all_blocks=1 00:06:13.129 --rc geninfo_unexecuted_blocks=1 00:06:13.129 00:06:13.129 ' 00:06:13.129 18:23:31 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.129 --rc genhtml_branch_coverage=1 00:06:13.129 --rc genhtml_function_coverage=1 00:06:13.129 --rc genhtml_legend=1 00:06:13.129 --rc geninfo_all_blocks=1 00:06:13.129 --rc geninfo_unexecuted_blocks=1 00:06:13.129 00:06:13.129 ' 00:06:13.129 18:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.129 18:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69803 00:06:13.129 18:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69803 00:06:13.129 18:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.129 18:23:31 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69803 ']' 00:06:13.129 18:23:31 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.129 18:23:31 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.129 18:23:31 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.129 18:23:31 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.129 18:23:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.390 [2024-12-08 18:23:31.082372] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:13.390 [2024-12-08 18:23:31.082662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69803 ] 00:06:13.390 [2024-12-08 18:23:31.214441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.390 [2024-12-08 18:23:31.297178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.649 [2024-12-08 18:23:31.388710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.908 18:23:31 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.908 18:23:31 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.908 18:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:14.166 18:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69803 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69803 ']' 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69803 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69803 00:06:14.166 killing process with pid 69803 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69803' 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@969 -- # kill 69803 00:06:14.166 18:23:31 alias_rpc -- common/autotest_common.sh@974 -- # wait 69803 00:06:14.426 ************************************ 00:06:14.426 END TEST alias_rpc 00:06:14.426 ************************************ 00:06:14.426 00:06:14.426 real 0m1.474s 00:06:14.426 user 0m1.475s 00:06:14.426 sys 0m0.498s 00:06:14.426 18:23:32 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.426 18:23:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.685 18:23:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:14.685 18:23:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.685 18:23:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.685 18:23:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.685 18:23:32 -- common/autotest_common.sh@10 -- # set +x 00:06:14.685 ************************************ 00:06:14.685 START TEST spdkcli_tcp 00:06:14.685 ************************************ 00:06:14.685 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.685 * Looking for test storage... 
00:06:14.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:14.685 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.685 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.685 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.685 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.685 18:23:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.686 18:23:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.686 18:23:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.686 18:23:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.686 18:23:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.686 18:23:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.686 18:23:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.686 --rc genhtml_branch_coverage=1 00:06:14.686 --rc genhtml_function_coverage=1 00:06:14.686 --rc genhtml_legend=1 00:06:14.686 --rc geninfo_all_blocks=1 00:06:14.686 --rc geninfo_unexecuted_blocks=1 00:06:14.686 00:06:14.686 ' 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.686 --rc genhtml_branch_coverage=1 00:06:14.686 --rc genhtml_function_coverage=1 00:06:14.686 --rc genhtml_legend=1 00:06:14.686 --rc geninfo_all_blocks=1 00:06:14.686 --rc geninfo_unexecuted_blocks=1 00:06:14.686 
00:06:14.686 ' 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.686 --rc genhtml_branch_coverage=1 00:06:14.686 --rc genhtml_function_coverage=1 00:06:14.686 --rc genhtml_legend=1 00:06:14.686 --rc geninfo_all_blocks=1 00:06:14.686 --rc geninfo_unexecuted_blocks=1 00:06:14.686 00:06:14.686 ' 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.686 --rc genhtml_branch_coverage=1 00:06:14.686 --rc genhtml_function_coverage=1 00:06:14.686 --rc genhtml_legend=1 00:06:14.686 --rc geninfo_all_blocks=1 00:06:14.686 --rc geninfo_unexecuted_blocks=1 00:06:14.686 00:06:14.686 ' 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69879 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69879 00:06:14.686 18:23:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 69879 ']' 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.686 18:23:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.945 [2024-12-08 18:23:32.644602] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
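For the spdkcli_tcp case now starting, the target runs with -m 0x3 (two reactors) and RPC is exercised over TCP rather than the UNIX socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at that TCP endpoint, as the trace below shows. A minimal sketch of the same setup, using the addresses and flags from the trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"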
00:06:14.945 [2024-12-08 18:23:32.644703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69879 ] 00:06:14.945 [2024-12-08 18:23:32.779624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.945 [2024-12-08 18:23:32.847868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.945 [2024-12-08 18:23:32.847875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.204 [2024-12-08 18:23:32.912396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.204 18:23:33 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.204 18:23:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:15.204 18:23:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69889 00:06:15.204 18:23:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.204 18:23:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.464 [ 00:06:15.464 "bdev_malloc_delete", 00:06:15.464 "bdev_malloc_create", 00:06:15.464 "bdev_null_resize", 00:06:15.464 "bdev_null_delete", 00:06:15.464 "bdev_null_create", 00:06:15.464 "bdev_nvme_cuse_unregister", 00:06:15.464 "bdev_nvme_cuse_register", 00:06:15.464 "bdev_opal_new_user", 00:06:15.464 "bdev_opal_set_lock_state", 00:06:15.464 "bdev_opal_delete", 00:06:15.464 "bdev_opal_get_info", 00:06:15.464 "bdev_opal_create", 00:06:15.464 "bdev_nvme_opal_revert", 00:06:15.464 "bdev_nvme_opal_init", 00:06:15.464 "bdev_nvme_send_cmd", 00:06:15.464 "bdev_nvme_set_keys", 00:06:15.464 "bdev_nvme_get_path_iostat", 00:06:15.464 "bdev_nvme_get_mdns_discovery_info", 00:06:15.464 "bdev_nvme_stop_mdns_discovery", 00:06:15.464 "bdev_nvme_start_mdns_discovery", 00:06:15.464 "bdev_nvme_set_multipath_policy", 00:06:15.464 "bdev_nvme_set_preferred_path", 00:06:15.464 "bdev_nvme_get_io_paths", 00:06:15.464 "bdev_nvme_remove_error_injection", 00:06:15.464 "bdev_nvme_add_error_injection", 00:06:15.464 "bdev_nvme_get_discovery_info", 00:06:15.464 "bdev_nvme_stop_discovery", 00:06:15.464 "bdev_nvme_start_discovery", 00:06:15.464 "bdev_nvme_get_controller_health_info", 00:06:15.464 "bdev_nvme_disable_controller", 00:06:15.464 "bdev_nvme_enable_controller", 00:06:15.464 "bdev_nvme_reset_controller", 00:06:15.464 "bdev_nvme_get_transport_statistics", 00:06:15.464 "bdev_nvme_apply_firmware", 00:06:15.464 "bdev_nvme_detach_controller", 00:06:15.464 "bdev_nvme_get_controllers", 00:06:15.464 "bdev_nvme_attach_controller", 00:06:15.464 "bdev_nvme_set_hotplug", 00:06:15.464 "bdev_nvme_set_options", 00:06:15.464 "bdev_passthru_delete", 00:06:15.464 "bdev_passthru_create", 00:06:15.464 "bdev_lvol_set_parent_bdev", 00:06:15.464 "bdev_lvol_set_parent", 00:06:15.464 "bdev_lvol_check_shallow_copy", 00:06:15.464 "bdev_lvol_start_shallow_copy", 00:06:15.464 "bdev_lvol_grow_lvstore", 00:06:15.464 "bdev_lvol_get_lvols", 00:06:15.464 "bdev_lvol_get_lvstores", 00:06:15.464 "bdev_lvol_delete", 00:06:15.464 "bdev_lvol_set_read_only", 00:06:15.464 "bdev_lvol_resize", 00:06:15.464 "bdev_lvol_decouple_parent", 00:06:15.464 "bdev_lvol_inflate", 00:06:15.464 "bdev_lvol_rename", 00:06:15.464 "bdev_lvol_clone_bdev", 00:06:15.464 "bdev_lvol_clone", 00:06:15.464 "bdev_lvol_snapshot", 
00:06:15.464 "bdev_lvol_create", 00:06:15.464 "bdev_lvol_delete_lvstore", 00:06:15.464 "bdev_lvol_rename_lvstore", 00:06:15.464 "bdev_lvol_create_lvstore", 00:06:15.464 "bdev_raid_set_options", 00:06:15.464 "bdev_raid_remove_base_bdev", 00:06:15.464 "bdev_raid_add_base_bdev", 00:06:15.464 "bdev_raid_delete", 00:06:15.464 "bdev_raid_create", 00:06:15.464 "bdev_raid_get_bdevs", 00:06:15.464 "bdev_error_inject_error", 00:06:15.464 "bdev_error_delete", 00:06:15.464 "bdev_error_create", 00:06:15.464 "bdev_split_delete", 00:06:15.464 "bdev_split_create", 00:06:15.464 "bdev_delay_delete", 00:06:15.464 "bdev_delay_create", 00:06:15.464 "bdev_delay_update_latency", 00:06:15.464 "bdev_zone_block_delete", 00:06:15.464 "bdev_zone_block_create", 00:06:15.464 "blobfs_create", 00:06:15.464 "blobfs_detect", 00:06:15.464 "blobfs_set_cache_size", 00:06:15.464 "bdev_aio_delete", 00:06:15.464 "bdev_aio_rescan", 00:06:15.464 "bdev_aio_create", 00:06:15.464 "bdev_ftl_set_property", 00:06:15.464 "bdev_ftl_get_properties", 00:06:15.464 "bdev_ftl_get_stats", 00:06:15.464 "bdev_ftl_unmap", 00:06:15.464 "bdev_ftl_unload", 00:06:15.464 "bdev_ftl_delete", 00:06:15.464 "bdev_ftl_load", 00:06:15.464 "bdev_ftl_create", 00:06:15.464 "bdev_virtio_attach_controller", 00:06:15.464 "bdev_virtio_scsi_get_devices", 00:06:15.464 "bdev_virtio_detach_controller", 00:06:15.464 "bdev_virtio_blk_set_hotplug", 00:06:15.464 "bdev_iscsi_delete", 00:06:15.464 "bdev_iscsi_create", 00:06:15.464 "bdev_iscsi_set_options", 00:06:15.464 "bdev_uring_delete", 00:06:15.464 "bdev_uring_rescan", 00:06:15.464 "bdev_uring_create", 00:06:15.464 "accel_error_inject_error", 00:06:15.464 "ioat_scan_accel_module", 00:06:15.464 "dsa_scan_accel_module", 00:06:15.464 "iaa_scan_accel_module", 00:06:15.464 "keyring_file_remove_key", 00:06:15.464 "keyring_file_add_key", 00:06:15.464 "keyring_linux_set_options", 00:06:15.464 "fsdev_aio_delete", 00:06:15.464 "fsdev_aio_create", 00:06:15.464 "iscsi_get_histogram", 00:06:15.464 "iscsi_enable_histogram", 00:06:15.464 "iscsi_set_options", 00:06:15.464 "iscsi_get_auth_groups", 00:06:15.464 "iscsi_auth_group_remove_secret", 00:06:15.464 "iscsi_auth_group_add_secret", 00:06:15.464 "iscsi_delete_auth_group", 00:06:15.464 "iscsi_create_auth_group", 00:06:15.464 "iscsi_set_discovery_auth", 00:06:15.464 "iscsi_get_options", 00:06:15.464 "iscsi_target_node_request_logout", 00:06:15.464 "iscsi_target_node_set_redirect", 00:06:15.464 "iscsi_target_node_set_auth", 00:06:15.464 "iscsi_target_node_add_lun", 00:06:15.464 "iscsi_get_stats", 00:06:15.464 "iscsi_get_connections", 00:06:15.464 "iscsi_portal_group_set_auth", 00:06:15.464 "iscsi_start_portal_group", 00:06:15.464 "iscsi_delete_portal_group", 00:06:15.464 "iscsi_create_portal_group", 00:06:15.464 "iscsi_get_portal_groups", 00:06:15.464 "iscsi_delete_target_node", 00:06:15.464 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.464 "iscsi_target_node_add_pg_ig_maps", 00:06:15.464 "iscsi_create_target_node", 00:06:15.464 "iscsi_get_target_nodes", 00:06:15.464 "iscsi_delete_initiator_group", 00:06:15.464 "iscsi_initiator_group_remove_initiators", 00:06:15.464 "iscsi_initiator_group_add_initiators", 00:06:15.464 "iscsi_create_initiator_group", 00:06:15.464 "iscsi_get_initiator_groups", 00:06:15.464 "nvmf_set_crdt", 00:06:15.464 "nvmf_set_config", 00:06:15.464 "nvmf_set_max_subsystems", 00:06:15.464 "nvmf_stop_mdns_prr", 00:06:15.464 "nvmf_publish_mdns_prr", 00:06:15.464 "nvmf_subsystem_get_listeners", 00:06:15.464 "nvmf_subsystem_get_qpairs", 00:06:15.464 
"nvmf_subsystem_get_controllers", 00:06:15.464 "nvmf_get_stats", 00:06:15.464 "nvmf_get_transports", 00:06:15.464 "nvmf_create_transport", 00:06:15.464 "nvmf_get_targets", 00:06:15.464 "nvmf_delete_target", 00:06:15.464 "nvmf_create_target", 00:06:15.464 "nvmf_subsystem_allow_any_host", 00:06:15.464 "nvmf_subsystem_set_keys", 00:06:15.465 "nvmf_subsystem_remove_host", 00:06:15.465 "nvmf_subsystem_add_host", 00:06:15.465 "nvmf_ns_remove_host", 00:06:15.465 "nvmf_ns_add_host", 00:06:15.465 "nvmf_subsystem_remove_ns", 00:06:15.465 "nvmf_subsystem_set_ns_ana_group", 00:06:15.465 "nvmf_subsystem_add_ns", 00:06:15.465 "nvmf_subsystem_listener_set_ana_state", 00:06:15.465 "nvmf_discovery_get_referrals", 00:06:15.465 "nvmf_discovery_remove_referral", 00:06:15.465 "nvmf_discovery_add_referral", 00:06:15.465 "nvmf_subsystem_remove_listener", 00:06:15.465 "nvmf_subsystem_add_listener", 00:06:15.465 "nvmf_delete_subsystem", 00:06:15.465 "nvmf_create_subsystem", 00:06:15.465 "nvmf_get_subsystems", 00:06:15.465 "env_dpdk_get_mem_stats", 00:06:15.465 "nbd_get_disks", 00:06:15.465 "nbd_stop_disk", 00:06:15.465 "nbd_start_disk", 00:06:15.465 "ublk_recover_disk", 00:06:15.465 "ublk_get_disks", 00:06:15.465 "ublk_stop_disk", 00:06:15.465 "ublk_start_disk", 00:06:15.465 "ublk_destroy_target", 00:06:15.465 "ublk_create_target", 00:06:15.465 "virtio_blk_create_transport", 00:06:15.465 "virtio_blk_get_transports", 00:06:15.465 "vhost_controller_set_coalescing", 00:06:15.465 "vhost_get_controllers", 00:06:15.465 "vhost_delete_controller", 00:06:15.465 "vhost_create_blk_controller", 00:06:15.465 "vhost_scsi_controller_remove_target", 00:06:15.465 "vhost_scsi_controller_add_target", 00:06:15.465 "vhost_start_scsi_controller", 00:06:15.465 "vhost_create_scsi_controller", 00:06:15.465 "thread_set_cpumask", 00:06:15.465 "scheduler_set_options", 00:06:15.465 "framework_get_governor", 00:06:15.465 "framework_get_scheduler", 00:06:15.465 "framework_set_scheduler", 00:06:15.465 "framework_get_reactors", 00:06:15.465 "thread_get_io_channels", 00:06:15.465 "thread_get_pollers", 00:06:15.465 "thread_get_stats", 00:06:15.465 "framework_monitor_context_switch", 00:06:15.465 "spdk_kill_instance", 00:06:15.465 "log_enable_timestamps", 00:06:15.465 "log_get_flags", 00:06:15.465 "log_clear_flag", 00:06:15.465 "log_set_flag", 00:06:15.465 "log_get_level", 00:06:15.465 "log_set_level", 00:06:15.465 "log_get_print_level", 00:06:15.465 "log_set_print_level", 00:06:15.465 "framework_enable_cpumask_locks", 00:06:15.465 "framework_disable_cpumask_locks", 00:06:15.465 "framework_wait_init", 00:06:15.465 "framework_start_init", 00:06:15.465 "scsi_get_devices", 00:06:15.465 "bdev_get_histogram", 00:06:15.465 "bdev_enable_histogram", 00:06:15.465 "bdev_set_qos_limit", 00:06:15.465 "bdev_set_qd_sampling_period", 00:06:15.465 "bdev_get_bdevs", 00:06:15.465 "bdev_reset_iostat", 00:06:15.465 "bdev_get_iostat", 00:06:15.465 "bdev_examine", 00:06:15.465 "bdev_wait_for_examine", 00:06:15.465 "bdev_set_options", 00:06:15.465 "accel_get_stats", 00:06:15.465 "accel_set_options", 00:06:15.465 "accel_set_driver", 00:06:15.465 "accel_crypto_key_destroy", 00:06:15.465 "accel_crypto_keys_get", 00:06:15.465 "accel_crypto_key_create", 00:06:15.465 "accel_assign_opc", 00:06:15.465 "accel_get_module_info", 00:06:15.465 "accel_get_opc_assignments", 00:06:15.465 "vmd_rescan", 00:06:15.465 "vmd_remove_device", 00:06:15.465 "vmd_enable", 00:06:15.465 "sock_get_default_impl", 00:06:15.465 "sock_set_default_impl", 00:06:15.465 "sock_impl_set_options", 00:06:15.465 
"sock_impl_get_options", 00:06:15.465 "iobuf_get_stats", 00:06:15.465 "iobuf_set_options", 00:06:15.465 "keyring_get_keys", 00:06:15.465 "framework_get_pci_devices", 00:06:15.465 "framework_get_config", 00:06:15.465 "framework_get_subsystems", 00:06:15.465 "fsdev_set_opts", 00:06:15.465 "fsdev_get_opts", 00:06:15.465 "trace_get_info", 00:06:15.465 "trace_get_tpoint_group_mask", 00:06:15.465 "trace_disable_tpoint_group", 00:06:15.465 "trace_enable_tpoint_group", 00:06:15.465 "trace_clear_tpoint_mask", 00:06:15.465 "trace_set_tpoint_mask", 00:06:15.465 "notify_get_notifications", 00:06:15.465 "notify_get_types", 00:06:15.465 "spdk_get_version", 00:06:15.465 "rpc_get_methods" 00:06:15.465 ] 00:06:15.724 18:23:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.724 18:23:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.724 18:23:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69879 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69879 ']' 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69879 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69879 00:06:15.724 killing process with pid 69879 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69879' 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69879 00:06:15.724 18:23:33 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69879 00:06:15.983 ************************************ 00:06:15.983 END TEST spdkcli_tcp 00:06:15.983 ************************************ 00:06:15.983 00:06:15.983 real 0m1.465s 00:06:15.983 user 0m2.474s 00:06:15.983 sys 0m0.475s 00:06:15.983 18:23:33 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.983 18:23:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.983 18:23:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.983 18:23:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.983 18:23:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.983 18:23:33 -- common/autotest_common.sh@10 -- # set +x 00:06:15.983 ************************************ 00:06:15.983 START TEST dpdk_mem_utility 00:06:15.983 ************************************ 00:06:15.983 18:23:33 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.255 * Looking for test storage... 
00:06:16.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:16.255 18:23:33 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.255 18:23:33 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.255 18:23:33 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.255 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.255 18:23:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:16.255 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.255 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.255 --rc genhtml_branch_coverage=1 00:06:16.255 --rc genhtml_function_coverage=1 00:06:16.255 --rc genhtml_legend=1 00:06:16.255 --rc geninfo_all_blocks=1 00:06:16.255 --rc geninfo_unexecuted_blocks=1 00:06:16.255 00:06:16.255 ' 00:06:16.255 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.255 --rc 
genhtml_branch_coverage=1 00:06:16.255 --rc genhtml_function_coverage=1 00:06:16.255 --rc genhtml_legend=1 00:06:16.255 --rc geninfo_all_blocks=1 00:06:16.255 --rc geninfo_unexecuted_blocks=1 00:06:16.255 00:06:16.256 ' 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.256 --rc genhtml_branch_coverage=1 00:06:16.256 --rc genhtml_function_coverage=1 00:06:16.256 --rc genhtml_legend=1 00:06:16.256 --rc geninfo_all_blocks=1 00:06:16.256 --rc geninfo_unexecuted_blocks=1 00:06:16.256 00:06:16.256 ' 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.256 --rc genhtml_branch_coverage=1 00:06:16.256 --rc genhtml_function_coverage=1 00:06:16.256 --rc genhtml_legend=1 00:06:16.256 --rc geninfo_all_blocks=1 00:06:16.256 --rc geninfo_unexecuted_blocks=1 00:06:16.256 00:06:16.256 ' 00:06:16.256 18:23:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.256 18:23:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69971 00:06:16.256 18:23:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.256 18:23:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69971 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69971 ']' 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.256 18:23:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.256 [2024-12-08 18:23:34.144771] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
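A minimal sketch of the launch sequence traced above, condensed for readability; it assumes the helpers from autotest_common.sh (waitforlisten, killprocess) behave as they do in this log, and uses the same paths shown in the trace:
# Sketch only -- condensed from the test_dpdk_mem_info.sh steps traced above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # start the SPDK target in the background
spdkpid=$!                                          # remember its pid (69971 in this run)
waitforlisten "$spdkpid"                            # poll /var/tmp/spdk.sock until the target answers RPCs
trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT     # ensure the target is torn down when the test exits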
00:06:16.256 [2024-12-08 18:23:34.144888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69971 ] 00:06:16.549 [2024-12-08 18:23:34.282478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.549 [2024-12-08 18:23:34.350798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.549 [2024-12-08 18:23:34.414084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.497 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.497 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:17.497 18:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:17.497 18:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:17.497 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.497 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.497 { 00:06:17.497 "filename": "/tmp/spdk_mem_dump.txt" 00:06:17.497 } 00:06:17.497 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.497 18:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:17.497 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:17.497 1 heaps totaling size 860.000000 MiB 00:06:17.497 size: 860.000000 MiB heap id: 0 00:06:17.497 end heaps---------- 00:06:17.497 9 mempools totaling size 642.649841 MiB 00:06:17.497 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:17.497 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:17.497 size: 92.545471 MiB name: bdev_io_69971 00:06:17.497 size: 51.011292 MiB name: evtpool_69971 00:06:17.497 size: 50.003479 MiB name: msgpool_69971 00:06:17.497 size: 36.509338 MiB name: fsdev_io_69971 00:06:17.497 size: 21.763794 MiB name: PDU_Pool 00:06:17.497 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:17.497 size: 0.026123 MiB name: Session_Pool 00:06:17.497 end mempools------- 00:06:17.497 6 memzones totaling size 4.142822 MiB 00:06:17.497 size: 1.000366 MiB name: RG_ring_0_69971 00:06:17.497 size: 1.000366 MiB name: RG_ring_1_69971 00:06:17.497 size: 1.000366 MiB name: RG_ring_4_69971 00:06:17.497 size: 1.000366 MiB name: RG_ring_5_69971 00:06:17.497 size: 0.125366 MiB name: RG_ring_2_69971 00:06:17.497 size: 0.015991 MiB name: RG_ring_3_69971 00:06:17.497 end memzones------- 00:06:17.497 18:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:17.497 heap id: 0 total size: 860.000000 MiB number of busy elements: 322 number of free elements: 16 00:06:17.497 list of free elements. 
size: 13.933777 MiB 00:06:17.497 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:17.497 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:17.497 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:17.497 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:17.497 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:17.497 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:17.497 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:17.497 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:17.497 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:17.497 element at address: 0x20001d800000 with size: 0.566956 MiB 00:06:17.497 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:17.497 element at address: 0x200003e00000 with size: 0.487183 MiB 00:06:17.497 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:17.497 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:17.497 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:06:17.497 element at address: 0x200003a00000 with size: 0.352112 MiB 00:06:17.497 list of standard malloc elements. size: 199.269531 MiB 00:06:17.497 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:17.497 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:17.497 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:17.497 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:17.497 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:17.497 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:17.497 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:17.498 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:17.498 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:17.498 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:17.498 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:17.498 element at 
address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b3c0 
with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:17.498 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:17.498 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001d891240 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:17.498 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892680 with size: 0.000183 MiB 
00:06:17.499 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:17.499 element at 
address: 0x20001d894c00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6dec0 
with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:17.499 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:17.500 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:17.500 list of memzone associated elements. 
size: 646.796692 MiB 00:06:17.500 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:17.500 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:17.500 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:17.500 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:17.500 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:17.500 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69971_0 00:06:17.500 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:17.500 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69971_0 00:06:17.500 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:17.500 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69971_0 00:06:17.500 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:17.500 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69971_0 00:06:17.500 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:17.500 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:17.500 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:17.500 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:17.500 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:17.500 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69971 00:06:17.500 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:17.500 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69971 00:06:17.500 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:17.500 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69971 00:06:17.500 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:17.500 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:17.500 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:17.500 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:17.500 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:17.500 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:17.500 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:17.500 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:17.500 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:17.500 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69971 00:06:17.500 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:17.500 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69971 00:06:17.500 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:17.500 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69971 00:06:17.500 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:17.500 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69971 00:06:17.500 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:17.500 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69971 00:06:17.500 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:17.500 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69971 00:06:17.500 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:17.500 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:17.500 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:17.500 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:17.500 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:17.500 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:17.500 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:06:17.500 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69971 00:06:17.500 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:17.500 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:17.500 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:06:17.500 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:17.500 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:06:17.500 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69971 00:06:17.500 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:06:17.500 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:17.500 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:17.500 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69971 00:06:17.500 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:17.500 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69971 00:06:17.500 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:06:17.500 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69971 00:06:17.500 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:06:17.500 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:17.500 18:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:17.500 18:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69971 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69971 ']' 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69971 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69971 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.500 killing process with pid 69971 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69971' 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69971 00:06:17.500 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69971 00:06:17.759 00:06:17.759 real 0m1.726s 00:06:17.759 user 0m1.838s 00:06:17.759 sys 0m0.439s 00:06:17.759 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.759 18:23:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.759 ************************************ 00:06:17.759 END TEST dpdk_mem_utility 00:06:17.759 ************************************ 00:06:17.759 18:23:35 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.759 18:23:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.759 18:23:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.759 18:23:35 -- common/autotest_common.sh@10 -- # set +x 
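The heap, mempool and memzone dump above is produced in two steps that can be reproduced by hand against a running target; a minimal sketch using only the RPC and script names that appear in this trace:
# Sketch only -- manual equivalent of the three dpdk_mem_utility calls traced above.
./scripts/rpc.py env_dpdk_get_mem_stats      # asks the target to write /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                   # summarizes heaps, mempools and memzones from the dump
./scripts/dpdk_mem_info.py -m 0              # detailed free/busy element listing for heap id 0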
00:06:17.759 ************************************ 00:06:17.759 START TEST event 00:06:17.759 ************************************ 00:06:17.759 18:23:35 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:18.017 * Looking for test storage... 00:06:18.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.017 18:23:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.017 18:23:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.017 18:23:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.017 18:23:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.017 18:23:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.017 18:23:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.017 18:23:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.017 18:23:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.017 18:23:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.017 18:23:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.017 18:23:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.017 18:23:35 event -- scripts/common.sh@344 -- # case "$op" in 00:06:18.017 18:23:35 event -- scripts/common.sh@345 -- # : 1 00:06:18.017 18:23:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.017 18:23:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.017 18:23:35 event -- scripts/common.sh@365 -- # decimal 1 00:06:18.017 18:23:35 event -- scripts/common.sh@353 -- # local d=1 00:06:18.017 18:23:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.017 18:23:35 event -- scripts/common.sh@355 -- # echo 1 00:06:18.017 18:23:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.017 18:23:35 event -- scripts/common.sh@366 -- # decimal 2 00:06:18.017 18:23:35 event -- scripts/common.sh@353 -- # local d=2 00:06:18.017 18:23:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.017 18:23:35 event -- scripts/common.sh@355 -- # echo 2 00:06:18.017 18:23:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.017 18:23:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.017 18:23:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.017 18:23:35 event -- scripts/common.sh@368 -- # return 0 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.017 --rc genhtml_branch_coverage=1 00:06:18.017 --rc genhtml_function_coverage=1 00:06:18.017 --rc genhtml_legend=1 00:06:18.017 --rc geninfo_all_blocks=1 00:06:18.017 --rc geninfo_unexecuted_blocks=1 00:06:18.017 00:06:18.017 ' 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.017 --rc genhtml_branch_coverage=1 00:06:18.017 --rc genhtml_function_coverage=1 00:06:18.017 --rc genhtml_legend=1 00:06:18.017 --rc 
geninfo_all_blocks=1 00:06:18.017 --rc geninfo_unexecuted_blocks=1 00:06:18.017 00:06:18.017 ' 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.017 --rc genhtml_branch_coverage=1 00:06:18.017 --rc genhtml_function_coverage=1 00:06:18.017 --rc genhtml_legend=1 00:06:18.017 --rc geninfo_all_blocks=1 00:06:18.017 --rc geninfo_unexecuted_blocks=1 00:06:18.017 00:06:18.017 ' 00:06:18.017 18:23:35 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.017 --rc genhtml_branch_coverage=1 00:06:18.017 --rc genhtml_function_coverage=1 00:06:18.017 --rc genhtml_legend=1 00:06:18.017 --rc geninfo_all_blocks=1 00:06:18.017 --rc geninfo_unexecuted_blocks=1 00:06:18.017 00:06:18.017 ' 00:06:18.017 18:23:35 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:18.017 18:23:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.018 18:23:35 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.018 18:23:35 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:18.018 18:23:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.018 18:23:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.018 ************************************ 00:06:18.018 START TEST event_perf 00:06:18.018 ************************************ 00:06:18.018 18:23:35 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.018 Running I/O for 1 seconds...[2024-12-08 18:23:35.890135] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:18.018 [2024-12-08 18:23:35.890229] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70055 ] 00:06:18.276 [2024-12-08 18:23:36.025043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.276 [2024-12-08 18:23:36.082793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.276 [2024-12-08 18:23:36.082912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.276 [2024-12-08 18:23:36.083057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.276 [2024-12-08 18:23:36.083059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.212 Running I/O for 1 seconds... 00:06:19.212 lcore 0: 180160 00:06:19.212 lcore 1: 180160 00:06:19.212 lcore 2: 180155 00:06:19.212 lcore 3: 180157 00:06:19.470 done. 
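The four "lcore N:" counters above come from the event_perf binary exercised by this test; a minimal sketch of an equivalent standalone invocation, reusing the path and flags shown in the run_test line:
# Sketch only -- same invocation as the traced run, outside the harness.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
#   -m 0xF : reactor core mask, one reactor per set bit (lcores 0-3 in this run)
#   -t 1   : measure for one second; each "lcore N:" line reports how many events
#            that reactor processed during the window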
00:06:19.470 00:06:19.470 real 0m1.272s 00:06:19.470 user 0m4.082s 00:06:19.470 sys 0m0.064s 00:06:19.470 18:23:37 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.470 18:23:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.470 ************************************ 00:06:19.470 END TEST event_perf 00:06:19.470 ************************************ 00:06:19.470 18:23:37 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:19.470 18:23:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:19.470 18:23:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.470 18:23:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.470 ************************************ 00:06:19.470 START TEST event_reactor 00:06:19.470 ************************************ 00:06:19.470 18:23:37 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:19.470 [2024-12-08 18:23:37.215529] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:19.470 [2024-12-08 18:23:37.215608] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70089 ] 00:06:19.470 [2024-12-08 18:23:37.344215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.728 [2024-12-08 18:23:37.402112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.709 test_start 00:06:20.709 oneshot 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 tick 250 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 tick 250 00:06:20.709 tick 500 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 tick 250 00:06:20.709 tick 100 00:06:20.709 tick 100 00:06:20.709 test_end 00:06:20.709 00:06:20.709 real 0m1.255s 00:06:20.709 user 0m1.098s 00:06:20.709 sys 0m0.051s 00:06:20.709 18:23:38 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.709 18:23:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:20.709 ************************************ 00:06:20.709 END TEST event_reactor 00:06:20.709 ************************************ 00:06:20.709 18:23:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.709 18:23:38 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:20.709 18:23:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.709 18:23:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.709 ************************************ 00:06:20.709 START TEST event_reactor_perf 00:06:20.709 ************************************ 00:06:20.709 18:23:38 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.709 [2024-12-08 18:23:38.524050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:20.709 [2024-12-08 18:23:38.524145] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70119 ] 00:06:20.967 [2024-12-08 18:23:38.658366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.967 [2024-12-08 18:23:38.715267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.901 test_start 00:06:21.901 test_end 00:06:21.901 Performance: 464838 events per second 00:06:21.901 ************************************ 00:06:21.901 END TEST event_reactor_perf 00:06:21.901 ************************************ 00:06:21.901 00:06:21.901 real 0m1.263s 00:06:21.901 user 0m1.098s 00:06:21.901 sys 0m0.060s 00:06:21.901 18:23:39 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.901 18:23:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.901 18:23:39 event -- event/event.sh@49 -- # uname -s 00:06:21.901 18:23:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.901 18:23:39 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.901 18:23:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.901 18:23:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.901 18:23:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.901 ************************************ 00:06:21.901 START TEST event_scheduler 00:06:21.901 ************************************ 00:06:21.901 18:23:39 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:22.161 * Looking for test storage... 
00:06:22.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:22.161 18:23:39 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.161 18:23:39 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.161 18:23:39 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.161 18:23:40 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.161 --rc genhtml_branch_coverage=1 00:06:22.161 --rc genhtml_function_coverage=1 00:06:22.161 --rc genhtml_legend=1 00:06:22.161 --rc geninfo_all_blocks=1 00:06:22.161 --rc geninfo_unexecuted_blocks=1 00:06:22.161 00:06:22.161 ' 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.161 --rc genhtml_branch_coverage=1 00:06:22.161 --rc genhtml_function_coverage=1 00:06:22.161 --rc genhtml_legend=1 00:06:22.161 --rc geninfo_all_blocks=1 00:06:22.161 --rc geninfo_unexecuted_blocks=1 00:06:22.161 00:06:22.161 ' 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.161 --rc genhtml_branch_coverage=1 00:06:22.161 --rc genhtml_function_coverage=1 00:06:22.161 --rc genhtml_legend=1 00:06:22.161 --rc geninfo_all_blocks=1 00:06:22.161 --rc geninfo_unexecuted_blocks=1 00:06:22.161 00:06:22.161 ' 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.161 --rc genhtml_branch_coverage=1 00:06:22.161 --rc genhtml_function_coverage=1 00:06:22.161 --rc genhtml_legend=1 00:06:22.161 --rc geninfo_all_blocks=1 00:06:22.161 --rc geninfo_unexecuted_blocks=1 00:06:22.161 00:06:22.161 ' 00:06:22.161 18:23:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:22.161 18:23:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70194 00:06:22.161 18:23:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:22.161 18:23:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.161 18:23:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70194 00:06:22.161 18:23:40 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70194 ']' 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.161 18:23:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.161 [2024-12-08 18:23:40.071714] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:22.161 [2024-12-08 18:23:40.071816] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70194 ] 00:06:22.420 [2024-12-08 18:23:40.212022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.420 [2024-12-08 18:23:40.286454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.420 [2024-12-08 18:23:40.286671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.420 [2024-12-08 18:23:40.286597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.420 [2024-12-08 18:23:40.286672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.356 18:23:41 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.356 18:23:41 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:23.356 18:23:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:23.356 18:23:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.356 18:23:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.356 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.356 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.356 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.356 POWER: Cannot set governor of lcore 0 to performance 00:06:23.356 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:23.356 POWER: Cannot set governor of lcore 0 to userspace 00:06:23.357 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:23.357 POWER: Unable to set Power Management Environment for lcore 0 00:06:23.357 [2024-12-08 18:23:41.094807] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:23.357 [2024-12-08 18:23:41.095002] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:23.357 [2024-12-08 18:23:41.095250] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:23.357 [2024-12-08 18:23:41.095276] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:23.357 [2024-12-08 18:23:41.095285] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:23.357 [2024-12-08 18:23:41.095292] scheduler_dynamic.c: 
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:23.357 18:23:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:23.357 18:23:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 [2024-12-08 18:23:41.153748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.357 [2024-12-08 18:23:41.183118] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:23.357 18:23:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:23.357 18:23:41 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.357 18:23:41 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 ************************************ 00:06:23.357 START TEST scheduler_create_thread 00:06:23.357 ************************************ 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 2 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 3 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 4 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.357 5 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 6 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 7 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 8 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 9 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.357 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.617 10 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@22 -- # thread_id=11 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.617 18:23:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.554 18:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.554 18:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:24.554 18:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.554 18:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.932 18:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.932 18:23:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:25.932 18:23:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:25.932 18:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.932 18:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.868 ************************************ 00:06:26.868 END TEST scheduler_create_thread 00:06:26.868 ************************************ 00:06:26.868 18:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.868 00:06:26.868 real 0m3.373s 00:06:26.868 user 0m0.012s 00:06:26.868 sys 0m0.007s 00:06:26.868 18:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.868 18:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.868 18:23:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:26.868 18:23:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70194 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70194 ']' 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70194 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70194 00:06:26.868 killing process with pid 70194 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70194' 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70194 00:06:26.868 18:23:44 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70194 00:06:27.127 [2024-12-08 18:23:44.947127] scheduler.c: 360:test_shutdown: *NOTICE*: 
Scheduler test application stopped. 00:06:27.384 00:06:27.384 real 0m5.357s 00:06:27.384 user 0m11.096s 00:06:27.384 sys 0m0.408s 00:06:27.384 18:23:45 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.384 18:23:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.384 ************************************ 00:06:27.384 END TEST event_scheduler 00:06:27.384 ************************************ 00:06:27.384 18:23:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:27.384 18:23:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:27.384 18:23:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.384 18:23:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.384 18:23:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.384 ************************************ 00:06:27.384 START TEST app_repeat 00:06:27.384 ************************************ 00:06:27.384 18:23:45 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:27.384 Process app_repeat pid: 70299 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70299 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70299' 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.384 spdk_app_start Round 0 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:27.384 18:23:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70299 /var/tmp/spdk-nbd.sock 00:06:27.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.384 18:23:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70299 ']' 00:06:27.384 18:23:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.385 18:23:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.385 18:23:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.385 18:23:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.385 18:23:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.385 [2024-12-08 18:23:45.269859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:27.385 [2024-12-08 18:23:45.269932] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:06:27.643 [2024-12-08 18:23:45.396623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.643 [2024-12-08 18:23:45.469377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.643 [2024-12-08 18:23:45.469399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.643 [2024-12-08 18:23:45.521344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.577 18:23:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.577 18:23:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:28.577 18:23:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.835 Malloc0 00:06:28.835 18:23:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.094 Malloc1 00:06:29.094 18:23:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.094 18:23:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.353 /dev/nbd0 00:06:29.353 18:23:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.353 18:23:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.353 18:23:47 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.353 1+0 records in 00:06:29.353 1+0 records out 00:06:29.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179942 s, 22.8 MB/s 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.353 18:23:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:29.353 18:23:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.353 18:23:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.353 18:23:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.611 /dev/nbd1 00:06:29.611 18:23:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.611 18:23:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.611 1+0 records in 00:06:29.611 1+0 records out 00:06:29.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268639 s, 15.2 MB/s 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.611 18:23:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:29.611 18:23:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.611 18:23:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.611 18:23:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:29.611 18:23:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.611 18:23:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.870 { 00:06:29.870 "nbd_device": "/dev/nbd0", 00:06:29.870 "bdev_name": "Malloc0" 00:06:29.870 }, 00:06:29.870 { 00:06:29.870 "nbd_device": "/dev/nbd1", 00:06:29.870 "bdev_name": "Malloc1" 00:06:29.870 } 00:06:29.870 ]' 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.870 { 00:06:29.870 "nbd_device": "/dev/nbd0", 00:06:29.870 "bdev_name": "Malloc0" 00:06:29.870 }, 00:06:29.870 { 00:06:29.870 "nbd_device": "/dev/nbd1", 00:06:29.870 "bdev_name": "Malloc1" 00:06:29.870 } 00:06:29.870 ]' 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.870 /dev/nbd1' 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.870 /dev/nbd1' 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.870 18:23:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.871 18:23:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.871 256+0 records in 00:06:29.871 256+0 records out 00:06:29.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766955 s, 137 MB/s 00:06:29.871 18:23:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.871 18:23:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.871 256+0 records in 00:06:29.871 256+0 records out 00:06:29.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222871 s, 47.0 MB/s 00:06:29.871 18:23:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.871 18:23:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.142 256+0 records in 00:06:30.142 256+0 records out 00:06:30.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.037131 s, 28.2 MB/s 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.142 18:23:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.401 18:23:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.660 18:23:48 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.660 18:23:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.919 18:23:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.919 18:23:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.178 18:23:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.437 [2024-12-08 18:23:49.209755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.437 [2024-12-08 18:23:49.260521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.437 [2024-12-08 18:23:49.260532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.437 [2024-12-08 18:23:49.312945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.437 [2024-12-08 18:23:49.313073] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.437 [2024-12-08 18:23:49.313102] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.722 spdk_app_start Round 1 00:06:34.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.722 18:23:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.722 18:23:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.722 18:23:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70299 /var/tmp/spdk-nbd.sock 00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70299 ']' 00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.722 18:23:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:34.722 18:23:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.722 Malloc0 00:06:34.722 18:23:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.981 Malloc1 00:06:34.981 18:23:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.981 18:23:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.239 /dev/nbd0 00:06:35.239 18:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.239 18:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.239 1+0 records in 00:06:35.239 1+0 records out 
00:06:35.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214725 s, 19.1 MB/s 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.239 18:23:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:35.239 18:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.239 18:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.239 18:23:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.497 /dev/nbd1 00:06:35.497 18:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.497 18:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.497 1+0 records in 00:06:35.497 1+0 records out 00:06:35.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306192 s, 13.4 MB/s 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.497 18:23:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:35.497 18:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.497 18:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.497 18:23:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.497 18:23:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.497 18:23:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.065 { 00:06:36.065 "nbd_device": "/dev/nbd0", 00:06:36.065 "bdev_name": "Malloc0" 00:06:36.065 }, 00:06:36.065 { 00:06:36.065 "nbd_device": "/dev/nbd1", 00:06:36.065 "bdev_name": "Malloc1" 00:06:36.065 } 
00:06:36.065 ]' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.065 { 00:06:36.065 "nbd_device": "/dev/nbd0", 00:06:36.065 "bdev_name": "Malloc0" 00:06:36.065 }, 00:06:36.065 { 00:06:36.065 "nbd_device": "/dev/nbd1", 00:06:36.065 "bdev_name": "Malloc1" 00:06:36.065 } 00:06:36.065 ]' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.065 /dev/nbd1' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.065 /dev/nbd1' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.065 256+0 records in 00:06:36.065 256+0 records out 00:06:36.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107706 s, 97.4 MB/s 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.065 256+0 records in 00:06:36.065 256+0 records out 00:06:36.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219487 s, 47.8 MB/s 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.065 256+0 records in 00:06:36.065 256+0 records out 00:06:36.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023096 s, 45.4 MB/s 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.065 18:23:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.325 18:23:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.584 18:23:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.843 18:23:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.843 18:23:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.102 18:23:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.359 [2024-12-08 18:23:55.113187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.359 [2024-12-08 18:23:55.156572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.359 [2024-12-08 18:23:55.156583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.359 [2024-12-08 18:23:55.208527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.359 [2024-12-08 18:23:55.208630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.359 [2024-12-08 18:23:55.208644] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.648 spdk_app_start Round 2 00:06:40.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.648 18:23:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.648 18:23:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:40.648 18:23:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70299 /var/tmp/spdk-nbd.sock 00:06:40.648 18:23:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70299 ']' 00:06:40.648 18:23:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.648 18:23:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.648 18:23:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:40.648 18:23:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.648 18:23:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.648 18:23:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.648 18:23:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:40.648 18:23:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.648 Malloc0 00:06:40.648 18:23:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.908 Malloc1 00:06:40.908 18:23:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.908 18:23:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.167 /dev/nbd0 00:06:41.167 18:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.167 18:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.167 1+0 records in 00:06:41.167 1+0 records out 
00:06:41.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407225 s, 10.1 MB/s 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.167 18:23:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.167 18:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.167 18:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.167 18:23:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.451 /dev/nbd1 00:06:41.719 18:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.719 18:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.719 1+0 records in 00:06:41.719 1+0 records out 00:06:41.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313966 s, 13.0 MB/s 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.719 18:23:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.720 18:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.720 18:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.720 18:23:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.720 18:23:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.720 18:23:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.979 { 00:06:41.979 "nbd_device": "/dev/nbd0", 00:06:41.979 "bdev_name": "Malloc0" 00:06:41.979 }, 00:06:41.979 { 00:06:41.979 "nbd_device": "/dev/nbd1", 00:06:41.979 "bdev_name": "Malloc1" 00:06:41.979 } 
00:06:41.979 ]' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.979 { 00:06:41.979 "nbd_device": "/dev/nbd0", 00:06:41.979 "bdev_name": "Malloc0" 00:06:41.979 }, 00:06:41.979 { 00:06:41.979 "nbd_device": "/dev/nbd1", 00:06:41.979 "bdev_name": "Malloc1" 00:06:41.979 } 00:06:41.979 ]' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.979 /dev/nbd1' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.979 /dev/nbd1' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.979 256+0 records in 00:06:41.979 256+0 records out 00:06:41.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104034 s, 101 MB/s 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.979 256+0 records in 00:06:41.979 256+0 records out 00:06:41.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210467 s, 49.8 MB/s 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.979 256+0 records in 00:06:41.979 256+0 records out 00:06:41.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243866 s, 43.0 MB/s 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.979 18:23:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.238 18:24:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.497 18:24:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.756 18:24:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.756 18:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.756 18:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:42.756 18:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.756 18:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.756 18:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.015 18:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.015 18:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.015 18:24:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.015 18:24:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.015 18:24:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.015 18:24:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.015 18:24:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.015 18:24:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.273 [2024-12-08 18:24:01.086463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.273 [2024-12-08 18:24:01.129201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.273 [2024-12-08 18:24:01.129213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.273 [2024-12-08 18:24:01.179574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.273 [2024-12-08 18:24:01.179696] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.273 [2024-12-08 18:24:01.179710] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.560 18:24:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70299 /var/tmp/spdk-nbd.sock 00:06:46.560 18:24:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70299 ']' 00:06:46.560 18:24:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.560 18:24:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.560 18:24:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
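The nbd_dd_data_verify calls traced above exercise a simple write/verify loop: fill a scratch file with 1 MiB of random data, push it through every /dev/nbd device with direct I/O, then byte-compare each device against the source file before the devices are detached. A minimal standalone sketch of that loop in bash (device list, the 256 x 4096-byte geometry and the cmp flags are taken from the trace; the scratch path and error handling are illustrative):

# Sketch of the write/verify pattern behind nbd_dd_data_verify (illustrative, not the SPDK helper itself)
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest        # the trace keeps this under test/event/nbdrandtest

# write phase: 1 MiB of random data, copied onto every NBD device with direct I/O
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: the first 1 MiB of each device must match the source file byte for byte
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev"
done
rm "$tmp_file"

After the comparison, the trace tears the devices down with rpc.py nbd_stop_disk and polls /proc/partitions (up to 20 times per device) until each nbd entry disappears.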
00:06:46.560 18:24:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.560 18:24:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:46.560 18:24:04 event.app_repeat -- event/event.sh@39 -- # killprocess 70299 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70299 ']' 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70299 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70299 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.560 killing process with pid 70299 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70299' 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70299 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70299 00:06:46.560 spdk_app_start is called in Round 0. 00:06:46.560 Shutdown signal received, stop current app iteration 00:06:46.560 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:46.560 spdk_app_start is called in Round 1. 00:06:46.560 Shutdown signal received, stop current app iteration 00:06:46.560 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:46.560 spdk_app_start is called in Round 2. 00:06:46.560 Shutdown signal received, stop current app iteration 00:06:46.560 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:46.560 spdk_app_start is called in Round 3. 00:06:46.560 Shutdown signal received, stop current app iteration 00:06:46.560 18:24:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.560 18:24:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.560 00:06:46.560 real 0m19.151s 00:06:46.560 user 0m43.401s 00:06:46.560 sys 0m2.795s 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.560 18:24:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.560 ************************************ 00:06:46.560 END TEST app_repeat 00:06:46.560 ************************************ 00:06:46.560 18:24:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.560 18:24:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.560 18:24:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.560 18:24:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.560 18:24:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.560 ************************************ 00:06:46.560 START TEST cpu_locks 00:06:46.560 ************************************ 00:06:46.560 18:24:04 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.820 * Looking for test storage... 
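The app_repeat teardown just above goes through the killprocess helper, which refuses to signal blindly: it first confirms the pid is still alive and that its command name is an SPDK reactor rather than a sudo wrapper, and only then kills and reaps it. A simplified sketch of that pattern (the real autotest_common.sh version also branches on the OS and handles the sudo case; this keeps only the happy path seen in the trace):

# Simplified sketch of the killprocess pattern from autotest_common.sh
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                        # the process must still exist
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an spdk_tgt instance
    if [ "$process_name" != sudo ]; then              # the sudo path is omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap it so later tests cannot collide with the pid
    fi
}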
00:06:46.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:46.820 18:24:04 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.820 18:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.820 18:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.820 18:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:46.820 18:24:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.821 18:24:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.821 --rc genhtml_branch_coverage=1 00:06:46.821 --rc genhtml_function_coverage=1 00:06:46.821 --rc genhtml_legend=1 00:06:46.821 --rc geninfo_all_blocks=1 00:06:46.821 --rc geninfo_unexecuted_blocks=1 00:06:46.821 00:06:46.821 ' 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.821 --rc genhtml_branch_coverage=1 00:06:46.821 --rc genhtml_function_coverage=1 
00:06:46.821 --rc genhtml_legend=1 00:06:46.821 --rc geninfo_all_blocks=1 00:06:46.821 --rc geninfo_unexecuted_blocks=1 00:06:46.821 00:06:46.821 ' 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.821 --rc genhtml_branch_coverage=1 00:06:46.821 --rc genhtml_function_coverage=1 00:06:46.821 --rc genhtml_legend=1 00:06:46.821 --rc geninfo_all_blocks=1 00:06:46.821 --rc geninfo_unexecuted_blocks=1 00:06:46.821 00:06:46.821 ' 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.821 --rc genhtml_branch_coverage=1 00:06:46.821 --rc genhtml_function_coverage=1 00:06:46.821 --rc genhtml_legend=1 00:06:46.821 --rc geninfo_all_blocks=1 00:06:46.821 --rc geninfo_unexecuted_blocks=1 00:06:46.821 00:06:46.821 ' 00:06:46.821 18:24:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.821 18:24:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.821 18:24:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.821 18:24:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.821 18:24:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.821 ************************************ 00:06:46.821 START TEST default_locks 00:06:46.821 ************************************ 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70740 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70740 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70740 ']' 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.821 18:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.821 [2024-12-08 18:24:04.711392] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
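The lcov probe above runs through scripts/common.sh, which compares version strings component by component after splitting them on '.', '-' and ':'; lt 1.15 2 succeeds because the first component, 1, is already smaller than 2. A compact sketch of that comparison, assuming purely numeric components (the real cmp_versions also validates every field through its decimal helper and supports the other comparison operators):

# Sketch of the component-wise version test behind "lt 1.15 2" in scripts/common.sh
lt_version() {
    local IFS='.-:'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields are treated as 0 in this sketch
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                    # equal versions are not "less than"
}

lt_version 1.15 2 && echo "lcov is older than 2.x, keep the legacy --rc options"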
00:06:46.821 [2024-12-08 18:24:04.711565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ] 00:06:47.082 [2024-12-08 18:24:04.860293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.082 [2024-12-08 18:24:04.916089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.082 [2024-12-08 18:24:04.976525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.341 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.341 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:47.341 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70740 00:06:47.341 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.341 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70740 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70740 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70740 ']' 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70740 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70740 00:06:47.909 killing process with pid 70740 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70740' 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70740 00:06:47.909 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70740 00:06:48.167 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70740 00:06:48.167 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:48.167 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70740 00:06:48.167 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70740 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70740 ']' 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.168 
18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.168 ERROR: process (pid: 70740) is no longer running 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70740) - No such process 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.168 00:06:48.168 real 0m1.361s 00:06:48.168 user 0m1.325s 00:06:48.168 sys 0m0.571s 00:06:48.168 ************************************ 00:06:48.168 END TEST default_locks 00:06:48.168 ************************************ 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.168 18:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.168 18:24:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.168 18:24:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.168 18:24:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.168 18:24:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.168 ************************************ 00:06:48.168 START TEST default_locks_via_rpc 00:06:48.168 ************************************ 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70784 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70784 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70784 ']' 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:06:48.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.168 18:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.427 [2024-12-08 18:24:06.122544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:48.427 [2024-12-08 18:24:06.122703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:06:48.427 [2024-12-08 18:24:06.271900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.427 [2024-12-08 18:24:06.327267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.686 [2024-12-08 18:24:06.389236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70784 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70784 00:06:49.253 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70784 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70784 ']' 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70784 00:06:49.820 18:24:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70784 00:06:49.820 killing process with pid 70784 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70784' 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70784 00:06:49.820 18:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70784 00:06:50.389 ************************************ 00:06:50.389 END TEST default_locks_via_rpc 00:06:50.389 ************************************ 00:06:50.389 00:06:50.389 real 0m2.031s 00:06:50.389 user 0m2.250s 00:06:50.389 sys 0m0.603s 00:06:50.389 18:24:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.389 18:24:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.389 18:24:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:50.389 18:24:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.389 18:24:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.389 18:24:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.389 ************************************ 00:06:50.389 START TEST non_locking_app_on_locked_coremask 00:06:50.389 ************************************ 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70837 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70837 /var/tmp/spdk.sock 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70837 ']' 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
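Both default_locks variants above confirm that a running spdk_tgt really holds its per-core lock file by listing the locks owned by the pid and grepping for the spdk_cpu_lock prefix; the _via_rpc variant additionally drops and re-takes the locks at runtime with rpc.py framework_disable_cpumask_locks / framework_enable_cpumask_locks before checking. The check itself is small enough to sketch standalone (the pid is the one from the trace; everything else is illustrative):

# Sketch of the locks_exist check used by the cpu_locks tests
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # lock files are named /var/tmp/spdk_cpu_lock_<core>
}

locks_exist 70740 && echo "spdk_tgt (pid 70740) holds its core lock"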
00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.389 18:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.389 [2024-12-08 18:24:08.206224] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:50.389 [2024-12-08 18:24:08.206368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70837 ] 00:06:50.648 [2024-12-08 18:24:08.355970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.648 [2024-12-08 18:24:08.426439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.648 [2024-12-08 18:24:08.487590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70853 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70853 /var/tmp/spdk2.sock 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70853 ']' 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.584 18:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.584 [2024-12-08 18:24:09.295556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:51.584 [2024-12-08 18:24:09.295666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:06:51.584 [2024-12-08 18:24:09.432545] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
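The non_locking_app_on_locked_coremask run starting here shows the one legitimate way to share a core: the second spdk_tgt is launched on the same mask but with --disable-cpumask-locks, so it never competes for the lock the first instance holds. Stripped of the waitforlisten plumbing, the launch sequence is roughly the following (binary path, masks and RPC socket are taken from the trace):

# Sketch of the two-target launch in non_locking_app_on_locked_coremask
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
pid1=$!

"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock
pid2=$!

In the real test each launch is followed by waitforlisten against that instance's RPC socket before the next step runs.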
00:06:51.584 [2024-12-08 18:24:09.432590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.843 [2024-12-08 18:24:09.565720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.843 [2024-12-08 18:24:09.689900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.431 18:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.431 18:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:52.431 18:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70837 00:06:52.432 18:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70837 00:06:52.432 18:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70837 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70837 ']' 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70837 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70837 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.370 killing process with pid 70837 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70837' 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70837 00:06:53.370 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70837 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70853 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70853 ']' 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70853 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70853 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.939 killing process with pid 70853 00:06:53.939 18:24:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70853' 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70853 00:06:53.939 18:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70853 00:06:54.508 00:06:54.508 real 0m4.072s 00:06:54.508 user 0m4.523s 00:06:54.508 sys 0m1.214s 00:06:54.508 18:24:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.508 18:24:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.508 ************************************ 00:06:54.508 END TEST non_locking_app_on_locked_coremask 00:06:54.508 ************************************ 00:06:54.508 18:24:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:54.508 18:24:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.508 18:24:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.508 18:24:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.508 ************************************ 00:06:54.508 START TEST locking_app_on_unlocked_coremask 00:06:54.508 ************************************ 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70920 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70920 /var/tmp/spdk.sock 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70920 ']' 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.508 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.508 [2024-12-08 18:24:12.299169] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:54.508 [2024-12-08 18:24:12.299270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70920 ] 00:06:54.508 [2024-12-08 18:24:12.433691] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.508 [2024-12-08 18:24:12.433741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.767 [2024-12-08 18:24:12.488660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.767 [2024-12-08 18:24:12.548831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70929 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70929 /var/tmp/spdk2.sock 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70929 ']' 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.027 18:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.027 [2024-12-08 18:24:12.772191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:55.027 [2024-12-08 18:24:12.772301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70929 ] 00:06:55.027 [2024-12-08 18:24:12.906604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.286 [2024-12-08 18:24:13.058399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.286 [2024-12-08 18:24:13.185385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.223 18:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.223 18:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:56.223 18:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70929 00:06:56.223 18:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.223 18:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70929 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70920 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70920 ']' 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70920 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70920 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.161 killing process with pid 70920 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70920' 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70920 00:06:57.161 18:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70920 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70929 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70929 ']' 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70929 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70929 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.728 killing process with pid 70929 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.728 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70929' 00:06:57.729 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70929 00:06:57.729 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70929 00:06:57.987 00:06:57.987 real 0m3.620s 00:06:57.987 user 0m3.954s 00:06:57.987 sys 0m1.136s 00:06:57.987 ************************************ 00:06:57.987 END TEST locking_app_on_unlocked_coremask 00:06:57.987 ************************************ 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.987 18:24:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:57.987 18:24:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.987 18:24:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.987 18:24:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.987 ************************************ 00:06:57.987 START TEST locking_app_on_locked_coremask 00:06:57.987 ************************************ 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70996 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70996 /var/tmp/spdk.sock 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70996 ']' 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.987 18:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.245 [2024-12-08 18:24:15.970552] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:58.245 [2024-12-08 18:24:15.970652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70996 ] 00:06:58.245 [2024-12-08 18:24:16.106639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.245 [2024-12-08 18:24:16.170805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.503 [2024-12-08 18:24:16.234840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71004 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71004 /var/tmp/spdk2.sock 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71004 /var/tmp/spdk2.sock 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71004 /var/tmp/spdk2.sock 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71004 ']' 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.503 18:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.762 [2024-12-08 18:24:16.482541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
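locking_app_on_locked_coremask inverts the expectation: the second spdk_tgt is pointed at a core that pid 70996 already claimed, so the test wraps waitforlisten in the NOT helper and passes only if startup fails. The essence of NOT is running the command and flipping its result; a simplified sketch follows (the real autotest_common.sh version also validates its argument and treats exit codes above 128, i.e. signal deaths, as genuine errors):

# Simplified sketch of the NOT wrapper: succeed only when the wrapped command fails
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"    # killed by a signal: a real error, not an expected failure
    (( es != 0 ))                     # expected failure -> status 0, unexpected success -> status 1
}

NOT waitforlisten 71004 /var/tmp/spdk2.sock && echo "second instance failed to start, as expected"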
00:06:58.762 [2024-12-08 18:24:16.482651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71004 ] 00:06:58.762 [2024-12-08 18:24:16.620460] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70996 has claimed it. 00:06:58.762 [2024-12-08 18:24:16.620520] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.330 ERROR: process (pid: 71004) is no longer running 00:06:59.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71004) - No such process 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70996 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70996 00:06:59.330 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70996 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70996 ']' 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70996 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70996 00:06:59.898 killing process with pid 70996 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70996' 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70996 00:06:59.898 18:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70996 00:07:00.157 00:07:00.157 real 0m2.100s 00:07:00.157 user 0m2.351s 00:07:00.157 sys 0m0.616s 00:07:00.157 18:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.157 18:24:18 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:00.157 ************************************ 00:07:00.157 END TEST locking_app_on_locked_coremask 00:07:00.157 ************************************ 00:07:00.157 18:24:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:00.157 18:24:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.157 18:24:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.157 18:24:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.157 ************************************ 00:07:00.157 START TEST locking_overlapped_coremask 00:07:00.157 ************************************ 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:00.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71050 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71050 /var/tmp/spdk.sock 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71050 ']' 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.157 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.416 [2024-12-08 18:24:18.109266] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:00.416 [2024-12-08 18:24:18.109354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71050 ] 00:07:00.416 [2024-12-08 18:24:18.232488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.416 [2024-12-08 18:24:18.293912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.416 [2024-12-08 18:24:18.294053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.416 [2024-12-08 18:24:18.294056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.675 [2024-12-08 18:24:18.361098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71060 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71060 /var/tmp/spdk2.sock 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71060 /var/tmp/spdk2.sock 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71060 /var/tmp/spdk2.sock 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71060 ']' 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.675 18:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.935 [2024-12-08 18:24:18.610075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:00.935 [2024-12-08 18:24:18.610183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71060 ] 00:07:00.935 [2024-12-08 18:24:18.751665] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71050 has claimed it. 00:07:00.935 [2024-12-08 18:24:18.751750] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.504 ERROR: process (pid: 71060) is no longer running 00:07:01.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71060) - No such process 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71050 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71050 ']' 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71050 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71050 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71050' 00:07:01.504 killing process with pid 71050 00:07:01.504 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71050 00:07:01.504 18:24:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71050 00:07:02.073 00:07:02.073 real 0m1.647s 00:07:02.073 user 0m4.401s 00:07:02.073 sys 0m0.409s 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.073 ************************************ 00:07:02.073 END TEST locking_overlapped_coremask 00:07:02.073 ************************************ 00:07:02.073 18:24:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.073 18:24:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.073 18:24:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.073 18:24:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.073 ************************************ 00:07:02.073 START TEST locking_overlapped_coremask_via_rpc 00:07:02.073 ************************************ 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71100 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71100 /var/tmp/spdk.sock 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71100 ']' 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.073 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.074 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.074 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.074 18:24:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.074 [2024-12-08 18:24:19.827618] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:02.074 [2024-12-08 18:24:19.827727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71100 ] 00:07:02.074 [2024-12-08 18:24:19.960918] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
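A note on the locking_overlapped_coremask failure traced above: the first spdk_tgt was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks share core 2 and the second target aborts with "Cannot create lock on core 2". A minimal shell sketch of the same overlap check follows; the mask_overlap helper is made up here for illustration and is not part of the test scripts.

    # Report which CPU cores two SPDK core masks have in common.
    # 0x7 covers cores 0,1,2 and 0x1c covers cores 2,3,4 -> they share core 2.
    mask_overlap() {
        local a=$(( $1 )) b=$(( $2 ))
        local common=$(( a & b )) core=0
        while (( common )); do
            (( common & 1 )) && echo "core $core"
            (( common >>= 1, core += 1 ))
        done
    }

    mask_overlap 0x7 0x1c   # prints: core 2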
00:07:02.074 [2024-12-08 18:24:19.961156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.333 [2024-12-08 18:24:20.045027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.333 [2024-12-08 18:24:20.045168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.333 [2024-12-08 18:24:20.045170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.333 [2024-12-08 18:24:20.109914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71120 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71120 /var/tmp/spdk2.sock 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71120 ']' 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.901 18:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.161 [2024-12-08 18:24:20.861068] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:03.161 [2024-12-08 18:24:20.861431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71120 ] 00:07:03.161 [2024-12-08 18:24:21.006942] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.161 [2024-12-08 18:24:21.006999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.420 [2024-12-08 18:24:21.174111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.420 [2024-12-08 18:24:21.174197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:03.421 [2024-12-08 18:24:21.174200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.421 [2024-12-08 18:24:21.311200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.987 [2024-12-08 18:24:21.876599] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71100 has claimed it. 00:07:03.987 request: 00:07:03.987 { 00:07:03.987 "method": "framework_enable_cpumask_locks", 00:07:03.987 "req_id": 1 00:07:03.987 } 00:07:03.987 Got JSON-RPC error response 00:07:03.987 response: 00:07:03.987 { 00:07:03.987 "code": -32603, 00:07:03.987 "message": "Failed to claim CPU core: 2" 00:07:03.987 } 00:07:03.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
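The locking_overlapped_coremask_via_rpc case above exercises the same conflict through JSON-RPC instead of at startup: both targets launch with --disable-cpumask-locks, the first then takes its core locks via framework_enable_cpumask_locks, and the identical call against the second target's socket fails with "Failed to claim CPU core: 2". A rough manual equivalent, assuming the socket paths and rpc.py location shown in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First target (mask 0x7, socket /var/tmp/spdk.sock) claims cores 0-2.
    $RPC -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # Second target (mask 0x1c, socket /var/tmp/spdk2.sock) overlaps on core 2,
    # so this returns JSON-RPC error -32603 "Failed to claim CPU core: 2".
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks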
00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71100 /var/tmp/spdk.sock 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71100 ']' 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.987 18:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71120 /var/tmp/spdk2.sock 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71120 ']' 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
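The waitforlisten calls traced throughout these tests block until the target process answers on its RPC socket, giving up after max_retries attempts (100 in the trace). The helper below is a simplified stand-in written for illustration only, not the actual implementation from autotest_common.sh:

    # Hypothetical stand-in for waitforlisten: poll an SPDK RPC socket until
    # the target responds, or give up after max_retries attempts.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
                rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc /var/tmp/spdk2.sock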
00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.245 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:04.503 ************************************ 00:07:04.503 END TEST locking_overlapped_coremask_via_rpc 00:07:04.503 ************************************ 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:04.503 00:07:04.503 real 0m2.658s 00:07:04.503 user 0m1.392s 00:07:04.503 sys 0m0.193s 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.503 18:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.761 18:24:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:04.761 18:24:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71100 ]] 00:07:04.761 18:24:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71100 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71100 ']' 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71100 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71100 00:07:04.761 killing process with pid 71100 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71100' 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71100 00:07:04.761 18:24:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71100 00:07:05.020 18:24:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71120 ]] 00:07:05.020 18:24:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71120 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71120 ']' 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71120 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.020 
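check_remaining_locks, expanded above, asserts that exactly the per-core lock files for cores 0-2 exist once both targets are up; the locks are ordinary files under /var/tmp, so they can be listed directly. A small sketch using the same paths as the trace:

    # Compare the lock files actually present against the set expected
    # for a 0x7 core mask (cores 0, 1 and 2).
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 are locked"

    # lslocks shows which process currently holds them, as in the earlier
    # locking_app_on_locked_coremask check.
    lslocks | grep spdk_cpu_lock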
18:24:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71120 00:07:05.020 killing process with pid 71120 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71120' 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71120 00:07:05.020 18:24:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71120 00:07:05.587 18:24:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.587 18:24:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:05.587 18:24:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71100 ]] 00:07:05.587 18:24:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71100 00:07:05.587 18:24:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71100 ']' 00:07:05.587 Process with pid 71100 is not found 00:07:05.587 18:24:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71100 00:07:05.587 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71100) - No such process 00:07:05.587 18:24:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71100 is not found' 00:07:05.587 18:24:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71120 ]] 00:07:05.587 18:24:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71120 00:07:05.587 18:24:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71120 ']' 00:07:05.587 18:24:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71120 00:07:05.587 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71120) - No such process 00:07:05.587 Process with pid 71120 is not found 00:07:05.588 18:24:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71120 is not found' 00:07:05.588 18:24:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.588 00:07:05.588 real 0m18.836s 00:07:05.588 user 0m32.861s 00:07:05.588 sys 0m5.658s 00:07:05.588 ************************************ 00:07:05.588 END TEST cpu_locks 00:07:05.588 ************************************ 00:07:05.588 18:24:23 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.588 18:24:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.588 ************************************ 00:07:05.588 END TEST event 00:07:05.588 ************************************ 00:07:05.588 00:07:05.588 real 0m47.635s 00:07:05.588 user 1m33.839s 00:07:05.588 sys 0m9.316s 00:07:05.588 18:24:23 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.588 18:24:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.588 18:24:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:05.588 18:24:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.588 18:24:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.588 18:24:23 -- common/autotest_common.sh@10 -- # set +x 00:07:05.588 ************************************ 00:07:05.588 START TEST thread 00:07:05.588 ************************************ 00:07:05.588 18:24:23 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:05.588 * Looking for test storage... 
00:07:05.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:05.588 18:24:23 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.588 18:24:23 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.588 18:24:23 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.847 18:24:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.847 18:24:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.847 18:24:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.847 18:24:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.847 18:24:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.847 18:24:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.847 18:24:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.847 18:24:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.847 18:24:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.847 18:24:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.847 18:24:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.847 18:24:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:05.847 18:24:23 thread -- scripts/common.sh@345 -- # : 1 00:07:05.847 18:24:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.847 18:24:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.847 18:24:23 thread -- scripts/common.sh@365 -- # decimal 1 00:07:05.847 18:24:23 thread -- scripts/common.sh@353 -- # local d=1 00:07:05.847 18:24:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.847 18:24:23 thread -- scripts/common.sh@355 -- # echo 1 00:07:05.847 18:24:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.847 18:24:23 thread -- scripts/common.sh@366 -- # decimal 2 00:07:05.847 18:24:23 thread -- scripts/common.sh@353 -- # local d=2 00:07:05.847 18:24:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.847 18:24:23 thread -- scripts/common.sh@355 -- # echo 2 00:07:05.847 18:24:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.847 18:24:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.847 18:24:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.847 18:24:23 thread -- scripts/common.sh@368 -- # return 0 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.847 --rc genhtml_branch_coverage=1 00:07:05.847 --rc genhtml_function_coverage=1 00:07:05.847 --rc genhtml_legend=1 00:07:05.847 --rc geninfo_all_blocks=1 00:07:05.847 --rc geninfo_unexecuted_blocks=1 00:07:05.847 00:07:05.847 ' 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.847 --rc genhtml_branch_coverage=1 00:07:05.847 --rc genhtml_function_coverage=1 00:07:05.847 --rc genhtml_legend=1 00:07:05.847 --rc geninfo_all_blocks=1 00:07:05.847 --rc geninfo_unexecuted_blocks=1 00:07:05.847 00:07:05.847 ' 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:05.847 --rc genhtml_branch_coverage=1 00:07:05.847 --rc genhtml_function_coverage=1 00:07:05.847 --rc genhtml_legend=1 00:07:05.847 --rc geninfo_all_blocks=1 00:07:05.847 --rc geninfo_unexecuted_blocks=1 00:07:05.847 00:07:05.847 ' 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.847 --rc genhtml_branch_coverage=1 00:07:05.847 --rc genhtml_function_coverage=1 00:07:05.847 --rc genhtml_legend=1 00:07:05.847 --rc geninfo_all_blocks=1 00:07:05.847 --rc geninfo_unexecuted_blocks=1 00:07:05.847 00:07:05.847 ' 00:07:05.847 18:24:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.847 18:24:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.847 ************************************ 00:07:05.847 START TEST thread_poller_perf 00:07:05.847 ************************************ 00:07:05.847 18:24:23 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:05.847 [2024-12-08 18:24:23.574907] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:05.847 [2024-12-08 18:24:23.575210] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71256 ] 00:07:05.847 [2024-12-08 18:24:23.702598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.847 [2024-12-08 18:24:23.756004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.847 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:07.293 [2024-12-08T18:24:25.223Z] ====================================== 00:07:07.293 [2024-12-08T18:24:25.223Z] busy:2210016634 (cyc) 00:07:07.293 [2024-12-08T18:24:25.223Z] total_run_count: 397000 00:07:07.293 [2024-12-08T18:24:25.223Z] tsc_hz: 2200000000 (cyc) 00:07:07.293 [2024-12-08T18:24:25.223Z] ====================================== 00:07:07.293 [2024-12-08T18:24:25.223Z] poller_cost: 5566 (cyc), 2530 (nsec) 00:07:07.293 00:07:07.293 real 0m1.273s 00:07:07.293 user 0m1.112s 00:07:07.293 sys 0m0.055s 00:07:07.293 18:24:24 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.293 18:24:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.293 ************************************ 00:07:07.293 END TEST thread_poller_perf 00:07:07.293 ************************************ 00:07:07.293 18:24:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.293 18:24:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:07.293 18:24:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.293 18:24:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.293 ************************************ 00:07:07.293 START TEST thread_poller_perf 00:07:07.293 ************************************ 00:07:07.293 18:24:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.293 [2024-12-08 18:24:24.901237] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:07.293 [2024-12-08 18:24:24.901517] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71286 ] 00:07:07.293 [2024-12-08 18:24:25.034999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.293 Running 1000 pollers for 1 seconds with 0 microseconds period. 
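For the first poller_perf run above (-b 1000 pollers, -l 1 microsecond period, -t 1 second), the reported poller_cost is simply the busy cycle count divided by the number of poller executions, converted to nanoseconds via the reported TSC frequency. The same arithmetic, using the numbers from that run:

    busy=2210016634       # busy TSC cycles over the run
    runs=397000           # total_run_count: poller executions
    tsc_hz=2200000000     # TSC frequency reported by the tool

    cycles_per_poll=$(( busy / runs ))                        # 5566 cyc
    ns_per_poll=$(( cycles_per_poll * 1000000000 / tsc_hz ))  # 2530 ns
    echo "poller_cost: $cycles_per_poll cyc, $ns_per_poll ns"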
00:07:07.293 [2024-12-08 18:24:25.085417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.237 [2024-12-08T18:24:26.167Z] ====================================== 00:07:08.237 [2024-12-08T18:24:26.167Z] busy:2201947868 (cyc) 00:07:08.237 [2024-12-08T18:24:26.167Z] total_run_count: 5242000 00:07:08.237 [2024-12-08T18:24:26.167Z] tsc_hz: 2200000000 (cyc) 00:07:08.237 [2024-12-08T18:24:26.167Z] ====================================== 00:07:08.237 [2024-12-08T18:24:26.167Z] poller_cost: 420 (cyc), 190 (nsec) 00:07:08.237 00:07:08.237 real 0m1.253s 00:07:08.237 user 0m1.096s 00:07:08.237 sys 0m0.051s 00:07:08.237 18:24:26 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.237 ************************************ 00:07:08.237 END TEST thread_poller_perf 00:07:08.237 18:24:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.237 ************************************ 00:07:08.497 18:24:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:08.497 00:07:08.497 real 0m2.819s 00:07:08.497 user 0m2.350s 00:07:08.497 sys 0m0.251s 00:07:08.497 18:24:26 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.497 18:24:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.497 ************************************ 00:07:08.497 END TEST thread 00:07:08.497 ************************************ 00:07:08.497 18:24:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:08.497 18:24:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:08.497 18:24:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.497 18:24:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.497 18:24:26 -- common/autotest_common.sh@10 -- # set +x 00:07:08.497 ************************************ 00:07:08.497 START TEST app_cmdline 00:07:08.497 ************************************ 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:08.497 * Looking for test storage... 
00:07:08.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.497 18:24:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.497 --rc genhtml_branch_coverage=1 00:07:08.497 --rc genhtml_function_coverage=1 00:07:08.497 --rc genhtml_legend=1 00:07:08.497 --rc geninfo_all_blocks=1 00:07:08.497 --rc geninfo_unexecuted_blocks=1 00:07:08.497 00:07:08.497 ' 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.497 --rc genhtml_branch_coverage=1 00:07:08.497 --rc genhtml_function_coverage=1 00:07:08.497 --rc genhtml_legend=1 00:07:08.497 --rc geninfo_all_blocks=1 00:07:08.497 --rc geninfo_unexecuted_blocks=1 00:07:08.497 
00:07:08.497 ' 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.497 --rc genhtml_branch_coverage=1 00:07:08.497 --rc genhtml_function_coverage=1 00:07:08.497 --rc genhtml_legend=1 00:07:08.497 --rc geninfo_all_blocks=1 00:07:08.497 --rc geninfo_unexecuted_blocks=1 00:07:08.497 00:07:08.497 ' 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.497 --rc genhtml_branch_coverage=1 00:07:08.497 --rc genhtml_function_coverage=1 00:07:08.497 --rc genhtml_legend=1 00:07:08.497 --rc geninfo_all_blocks=1 00:07:08.497 --rc geninfo_unexecuted_blocks=1 00:07:08.497 00:07:08.497 ' 00:07:08.497 18:24:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.497 18:24:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71369 00:07:08.497 18:24:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.497 18:24:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71369 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71369 ']' 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.497 18:24:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.757 [2024-12-08 18:24:26.483456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:08.757 [2024-12-08 18:24:26.483590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71369 ] 00:07:08.757 [2024-12-08 18:24:26.619766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.757 [2024-12-08 18:24:26.675491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.017 [2024-12-08 18:24:26.741995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.017 18:24:26 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.017 18:24:26 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:09.017 18:24:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:09.279 { 00:07:09.279 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:09.279 "fields": { 00:07:09.279 "major": 24, 00:07:09.279 "minor": 9, 00:07:09.279 "patch": 1, 00:07:09.279 "suffix": "-pre", 00:07:09.279 "commit": "b18e1bd62" 00:07:09.279 } 00:07:09.279 } 00:07:09.279 18:24:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.279 18:24:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.279 18:24:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:09.279 18:24:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.279 18:24:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.279 18:24:27 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.279 18:24:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.279 18:24:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.279 18:24:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:09.279 18:24:27 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.539 18:24:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.539 18:24:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.539 18:24:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:09.539 18:24:27 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.799 request: 00:07:09.799 { 00:07:09.799 "method": "env_dpdk_get_mem_stats", 00:07:09.799 "req_id": 1 00:07:09.799 } 00:07:09.799 Got JSON-RPC error response 00:07:09.799 response: 00:07:09.799 { 00:07:09.799 "code": -32601, 00:07:09.799 "message": "Method not found" 00:07:09.799 } 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.799 18:24:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71369 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71369 ']' 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71369 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71369 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.799 killing process with pid 71369 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71369' 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@969 -- # kill 71369 00:07:09.799 18:24:27 app_cmdline -- common/autotest_common.sh@974 -- # wait 71369 00:07:10.059 00:07:10.059 real 0m1.724s 00:07:10.059 user 0m2.103s 00:07:10.059 sys 0m0.471s 00:07:10.059 18:24:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.059 ************************************ 00:07:10.059 END TEST app_cmdline 00:07:10.059 ************************************ 00:07:10.059 18:24:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.319 18:24:27 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:10.319 18:24:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.319 18:24:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.319 18:24:27 -- common/autotest_common.sh@10 -- # set +x 00:07:10.319 ************************************ 00:07:10.319 START TEST version 00:07:10.319 ************************************ 00:07:10.319 18:24:28 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:10.319 * Looking for test storage... 
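The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; anything else, such as env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601 ("Method not found"). The same behaviour can be reproduced by hand against the default socket, assuming the rpc.py path from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC spdk_get_version         # allowed: prints the version object
    $RPC rpc_get_methods          # allowed: lists exactly the two permitted methods
    $RPC env_dpdk_get_mem_stats   # filtered: fails with "Method not found" (-32601)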
00:07:10.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:10.319 18:24:28 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:10.319 18:24:28 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:10.319 18:24:28 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.319 18:24:28 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.319 18:24:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.319 18:24:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.319 18:24:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.319 18:24:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.319 18:24:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.319 18:24:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.319 18:24:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.319 18:24:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.319 18:24:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.319 18:24:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.319 18:24:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.319 18:24:28 version -- scripts/common.sh@344 -- # case "$op" in 00:07:10.319 18:24:28 version -- scripts/common.sh@345 -- # : 1 00:07:10.319 18:24:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.319 18:24:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.320 18:24:28 version -- scripts/common.sh@365 -- # decimal 1 00:07:10.320 18:24:28 version -- scripts/common.sh@353 -- # local d=1 00:07:10.320 18:24:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.320 18:24:28 version -- scripts/common.sh@355 -- # echo 1 00:07:10.320 18:24:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.320 18:24:28 version -- scripts/common.sh@366 -- # decimal 2 00:07:10.320 18:24:28 version -- scripts/common.sh@353 -- # local d=2 00:07:10.320 18:24:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.320 18:24:28 version -- scripts/common.sh@355 -- # echo 2 00:07:10.320 18:24:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.320 18:24:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.320 18:24:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.320 18:24:28 version -- scripts/common.sh@368 -- # return 0 00:07:10.320 18:24:28 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.320 18:24:28 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.320 --rc genhtml_branch_coverage=1 00:07:10.320 --rc genhtml_function_coverage=1 00:07:10.320 --rc genhtml_legend=1 00:07:10.320 --rc geninfo_all_blocks=1 00:07:10.320 --rc geninfo_unexecuted_blocks=1 00:07:10.320 00:07:10.320 ' 00:07:10.320 18:24:28 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.320 --rc genhtml_branch_coverage=1 00:07:10.320 --rc genhtml_function_coverage=1 00:07:10.320 --rc genhtml_legend=1 00:07:10.320 --rc geninfo_all_blocks=1 00:07:10.320 --rc geninfo_unexecuted_blocks=1 00:07:10.320 00:07:10.320 ' 00:07:10.320 18:24:28 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.320 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:10.320 --rc genhtml_branch_coverage=1 00:07:10.320 --rc genhtml_function_coverage=1 00:07:10.320 --rc genhtml_legend=1 00:07:10.320 --rc geninfo_all_blocks=1 00:07:10.320 --rc geninfo_unexecuted_blocks=1 00:07:10.320 00:07:10.320 ' 00:07:10.320 18:24:28 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.320 --rc genhtml_branch_coverage=1 00:07:10.320 --rc genhtml_function_coverage=1 00:07:10.320 --rc genhtml_legend=1 00:07:10.320 --rc geninfo_all_blocks=1 00:07:10.320 --rc geninfo_unexecuted_blocks=1 00:07:10.320 00:07:10.320 ' 00:07:10.320 18:24:28 version -- app/version.sh@17 -- # get_header_version major 00:07:10.320 18:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.320 18:24:28 version -- app/version.sh@17 -- # major=24 00:07:10.320 18:24:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.320 18:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.320 18:24:28 version -- app/version.sh@18 -- # minor=9 00:07:10.320 18:24:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.320 18:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.320 18:24:28 version -- app/version.sh@19 -- # patch=1 00:07:10.320 18:24:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.320 18:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:10.320 18:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.320 18:24:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.320 18:24:28 version -- app/version.sh@22 -- # version=24.9 00:07:10.320 18:24:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.320 18:24:28 version -- app/version.sh@25 -- # version=24.9.1 00:07:10.320 18:24:28 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:10.320 18:24:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:10.320 18:24:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.580 18:24:28 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:10.580 18:24:28 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:10.580 00:07:10.580 real 0m0.242s 00:07:10.580 user 0m0.159s 00:07:10.580 sys 0m0.120s 00:07:10.580 18:24:28 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.580 18:24:28 version -- common/autotest_common.sh@10 -- # set +x 00:07:10.580 ************************************ 00:07:10.580 END TEST version 
00:07:10.580 ************************************ 00:07:10.580 18:24:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:10.580 18:24:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:10.580 18:24:28 -- spdk/autotest.sh@194 -- # uname -s 00:07:10.580 18:24:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:10.580 18:24:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.580 18:24:28 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:10.580 18:24:28 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:10.580 18:24:28 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:10.580 18:24:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.580 18:24:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.580 18:24:28 -- common/autotest_common.sh@10 -- # set +x 00:07:10.580 ************************************ 00:07:10.580 START TEST spdk_dd 00:07:10.580 ************************************ 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:10.580 * Looking for test storage... 00:07:10.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.580 --rc genhtml_branch_coverage=1 00:07:10.580 --rc genhtml_function_coverage=1 00:07:10.580 --rc genhtml_legend=1 00:07:10.580 --rc geninfo_all_blocks=1 00:07:10.580 --rc geninfo_unexecuted_blocks=1 00:07:10.580 00:07:10.580 ' 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.580 --rc genhtml_branch_coverage=1 00:07:10.580 --rc genhtml_function_coverage=1 00:07:10.580 --rc genhtml_legend=1 00:07:10.580 --rc geninfo_all_blocks=1 00:07:10.580 --rc geninfo_unexecuted_blocks=1 00:07:10.580 00:07:10.580 ' 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.580 --rc genhtml_branch_coverage=1 00:07:10.580 --rc genhtml_function_coverage=1 00:07:10.580 --rc genhtml_legend=1 00:07:10.580 --rc geninfo_all_blocks=1 00:07:10.580 --rc geninfo_unexecuted_blocks=1 00:07:10.580 00:07:10.580 ' 00:07:10.580 18:24:28 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.580 --rc genhtml_branch_coverage=1 00:07:10.580 --rc genhtml_function_coverage=1 00:07:10.580 --rc genhtml_legend=1 00:07:10.580 --rc geninfo_all_blocks=1 00:07:10.580 --rc geninfo_unexecuted_blocks=1 00:07:10.580 00:07:10.580 ' 00:07:10.580 18:24:28 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.580 18:24:28 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.581 18:24:28 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.581 18:24:28 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.581 18:24:28 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.581 18:24:28 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.581 18:24:28 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.581 18:24:28 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.581 18:24:28 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:10.581 18:24:28 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.581 18:24:28 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:11.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.151 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:11.151 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:11.151 18:24:28 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:11.152 18:24:28 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:11.152 18:24:28 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:11.152 18:24:28 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:11.152 18:24:28 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
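The nvme_in_userspace step traced just above derives the NVMe PCI addresses (0000:00:10.0 and 0000:00:11.0) that dd.sh will operate on. A bash sketch of that enumeration, reusing the lspci pipeline from the trace; treating every matching controller as usable (the trace additionally filters each BDF through pci_can_use) is a simplification:

# list PCI functions with class 01 (mass storage), subclass 08 (NVM),
# prog-if 02 (NVMe), one BDF per line
iter_nvme_bdfs() {
    lspci -mm -n -D |
        grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |
        tr -d '"'
}
iter_nvme_bdfs    # prints e.g. 0000:00:10.0 and 0000:00:11.0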
00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
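The dd/common.sh loop running through these lines is check_liburing, which decides whether the spdk_dd binary was linked against liburing by scanning its dynamic section for a NEEDED entry. A minimal bash equivalent of the same scan, with the binary path as a placeholder:

# print 1 if the given ELF binary lists a liburing.so.* NEEDED entry, else 0
liburing_in_use() {
    local bin=$1 lib
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && { echo 1; return; }
    done < <(objdump -p "$bin" | grep NEEDED)
    echo 0
}
liburing_in_use build/bin/spdk_dd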
00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.152 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:11.153 * spdk_dd linked to liburing 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:11.153 18:24:28 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:11.153 18:24:28 spdk_dd -- 
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:11.153 18:24:28 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:11.153 18:24:28 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_PGO_DIR= 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:11.154 18:24:28 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:11.154 18:24:28 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:11.154 18:24:28 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:11.154 18:24:28 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:11.154 18:24:28 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:11.154 18:24:28 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:11.154 18:24:28 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:11.154 18:24:28 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:11.154 18:24:28 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.154 18:24:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:11.154 ************************************ 00:07:11.154 START TEST spdk_dd_basic_rw 00:07:11.154 ************************************ 00:07:11.154 18:24:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:11.154 * Looking for test storage... 00:07:11.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:11.154 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:11.154 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:11.154 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.414 --rc genhtml_branch_coverage=1 00:07:11.414 --rc genhtml_function_coverage=1 00:07:11.414 --rc genhtml_legend=1 00:07:11.414 --rc geninfo_all_blocks=1 00:07:11.414 --rc geninfo_unexecuted_blocks=1 00:07:11.414 00:07:11.414 ' 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.414 --rc genhtml_branch_coverage=1 00:07:11.414 --rc genhtml_function_coverage=1 00:07:11.414 --rc genhtml_legend=1 00:07:11.414 --rc geninfo_all_blocks=1 00:07:11.414 --rc geninfo_unexecuted_blocks=1 00:07:11.414 00:07:11.414 ' 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.414 --rc genhtml_branch_coverage=1 00:07:11.414 --rc genhtml_function_coverage=1 00:07:11.414 --rc genhtml_legend=1 00:07:11.414 --rc geninfo_all_blocks=1 00:07:11.414 --rc geninfo_unexecuted_blocks=1 00:07:11.414 00:07:11.414 ' 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.414 --rc genhtml_branch_coverage=1 00:07:11.414 --rc genhtml_function_coverage=1 00:07:11.414 --rc genhtml_legend=1 00:07:11.414 --rc geninfo_all_blocks=1 00:07:11.414 --rc geninfo_unexecuted_blocks=1 00:07:11.414 00:07:11.414 ' 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.414 18:24:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.415 18:24:29 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:11.415 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:11.677 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:11.677 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:11.677 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.678 ************************************ 00:07:11.678 START TEST dd_bs_lt_native_bs 00:07:11.678 ************************************ 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.678 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:11.678 { 00:07:11.678 "subsystems": [ 00:07:11.678 { 00:07:11.678 "subsystem": "bdev", 00:07:11.678 "config": [ 00:07:11.678 { 00:07:11.678 "params": { 00:07:11.678 "trtype": "pcie", 00:07:11.678 "traddr": "0000:00:10.0", 00:07:11.678 "name": "Nvme0" 00:07:11.678 }, 00:07:11.678 "method": "bdev_nvme_attach_controller" 00:07:11.678 }, 00:07:11.678 { 00:07:11.678 "method": "bdev_wait_for_examine" 00:07:11.678 } 00:07:11.678 ] 00:07:11.678 } 00:07:11.678 ] 00:07:11.678 } 00:07:11.678 [2024-12-08 18:24:29.437858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:11.678 [2024-12-08 18:24:29.437946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71713 ] 00:07:11.678 [2024-12-08 18:24:29.573278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.938 [2024-12-08 18:24:29.643127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.938 [2024-12-08 18:24:29.699180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.938 [2024-12-08 18:24:29.805710] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:11.938 [2024-12-08 18:24:29.805980] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.196 [2024-12-08 18:24:29.921649] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:12.196 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:12.196 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.196 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:12.196 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:12.197 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:12.197 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.197 00:07:12.197 real 0m0.606s 00:07:12.197 user 0m0.384s 00:07:12.197 sys 0m0.173s 00:07:12.197 18:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.197 18:24:29 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:12.197 ************************************ 00:07:12.197 END TEST dd_bs_lt_native_bs 00:07:12.197 ************************************ 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.197 ************************************ 00:07:12.197 START TEST dd_rw 00:07:12.197 ************************************ 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:12.197 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.765 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:12.765 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:12.765 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.765 18:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.765 [2024-12-08 18:24:30.654997] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
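The test that just finished above hinges on the namespace's native block size: the dd/common.sh step pulls 4096 out of the identify dump with the regex visible at the end of that dump, and dd_bs_lt_native_bs then expects spdk_dd to refuse --bs=2048, producing the "--bs value cannot be less than ... native block size" error recorded a few records earlier. A minimal stand-alone sketch of that check follows; identify.txt and dd_bdev.json are hypothetical file names (the harness pipes both the payload and the bdev config through /dev/fd descriptors instead), and dd_bdev.json refers to the bdev configuration reproduced a little further down.

# Stand-alone sketch of the dd_bs_lt_native_bs check, assuming the identify
# dump was saved to identify.txt and a bdev config to dd_bdev.json (both
# hypothetical names); spdk_dd stands for build/bin/spdk_dd from the SPDK tree.
pat='LBA Format #04: Data Size: *([0-9]+)'     # the regex shown in the trace above
[[ $(<identify.txt) =~ $pat ]] && native_bs=${BASH_REMATCH[1]}   # 4096 here
small_bs=$((native_bs / 2))                    # 2048, the value the test passes

if spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs="$small_bs" --count=1 --json dd_bdev.json; then
    echo "FAIL: spdk_dd accepted --bs=$small_bs below the ${native_bs}-byte native block size" >&2
    exit 1
fi
echo "PASS: spdk_dd rejected --bs=$small_bs, matching the error recorded above"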
00:07:12.765 [2024-12-08 18:24:30.655250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71744 ] 00:07:12.765 { 00:07:12.765 "subsystems": [ 00:07:12.765 { 00:07:12.765 "subsystem": "bdev", 00:07:12.765 "config": [ 00:07:12.765 { 00:07:12.765 "params": { 00:07:12.765 "trtype": "pcie", 00:07:12.765 "traddr": "0000:00:10.0", 00:07:12.765 "name": "Nvme0" 00:07:12.765 }, 00:07:12.765 "method": "bdev_nvme_attach_controller" 00:07:12.765 }, 00:07:12.765 { 00:07:12.765 "method": "bdev_wait_for_examine" 00:07:12.765 } 00:07:12.765 ] 00:07:12.765 } 00:07:12.765 ] 00:07:12.765 } 00:07:13.024 [2024-12-08 18:24:30.794832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.024 [2024-12-08 18:24:30.861872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.024 [2024-12-08 18:24:30.920293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.283  [2024-12-08T18:24:31.213Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:13.283 00:07:13.542 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:13.542 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:13.542 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.542 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.542 [2024-12-08 18:24:31.273524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
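The asterisk banners, the START TEST / END TEST markers and the real/user/sys timings around dd_bs_lt_native_bs (and, later in this log, dd_rw and dd_rw_offset) come from the harness's run_test wrapper. The sketch below is only a conceptual stand-in for what those markers imply; the real autotest_common.sh implementation also manages xtrace and failure accounting, which is omitted here.

# Conceptual stand-in for run_test (NOT the real autotest_common.sh code):
# print the banners seen in this log, time the test body, and pass its
# exit status back to the caller.
run_test() {
    local name=$1; shift
    printf '************************************\n'
    printf 'START TEST %s\n' "$name"
    printf '************************************\n'
    time "$@"
    local rc=$?
    printf '************************************\n'
    printf 'END TEST %s\n' "$name"
    printf '************************************\n'
    return "$rc"
}

run_test dd_rw basic_rw 4096    # the invocation traced at dd/basic_rw.sh@103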
00:07:13.542 [2024-12-08 18:24:31.273825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71763 ] 00:07:13.542 { 00:07:13.542 "subsystems": [ 00:07:13.542 { 00:07:13.542 "subsystem": "bdev", 00:07:13.542 "config": [ 00:07:13.542 { 00:07:13.542 "params": { 00:07:13.542 "trtype": "pcie", 00:07:13.542 "traddr": "0000:00:10.0", 00:07:13.542 "name": "Nvme0" 00:07:13.542 }, 00:07:13.542 "method": "bdev_nvme_attach_controller" 00:07:13.542 }, 00:07:13.542 { 00:07:13.542 "method": "bdev_wait_for_examine" 00:07:13.542 } 00:07:13.542 ] 00:07:13.542 } 00:07:13.542 ] 00:07:13.542 } 00:07:13.542 [2024-12-08 18:24:31.408345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.542 [2024-12-08 18:24:31.461861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.801 [2024-12-08 18:24:31.513159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.801  [2024-12-08T18:24:31.990Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:14.060 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.060 18:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.060 [2024-12-08 18:24:31.858920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
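Every spdk_dd invocation in this section is handed the same bdev configuration through --json /dev/fd/62; the JSON printed between the EAL-parameter records above is reproduced here as a regular file so the other sketches in this section can refer to it as dd_bdev.json (a hypothetical name).

# The bdev configuration each spdk_dd run consumes, copied from the log;
# written to a file here instead of being passed over /dev/fd/62.
cat > dd_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
EOF

bdev_nvme_attach_controller binds the emulated controller at PCI address 0000:00:10.0 as Nvme0, and bdev_wait_for_examine makes spdk_dd wait until its namespace is exposed as the Nvme0n1 bdev before any copy starts.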
00:07:14.060 [2024-12-08 18:24:31.859006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71778 ] 00:07:14.060 { 00:07:14.060 "subsystems": [ 00:07:14.060 { 00:07:14.060 "subsystem": "bdev", 00:07:14.060 "config": [ 00:07:14.060 { 00:07:14.060 "params": { 00:07:14.060 "trtype": "pcie", 00:07:14.060 "traddr": "0000:00:10.0", 00:07:14.060 "name": "Nvme0" 00:07:14.060 }, 00:07:14.060 "method": "bdev_nvme_attach_controller" 00:07:14.060 }, 00:07:14.060 { 00:07:14.060 "method": "bdev_wait_for_examine" 00:07:14.060 } 00:07:14.060 ] 00:07:14.060 } 00:07:14.060 ] 00:07:14.060 } 00:07:14.320 [2024-12-08 18:24:31.995984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.320 [2024-12-08 18:24:32.050074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.320 [2024-12-08 18:24:32.100878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.320  [2024-12-08T18:24:32.509Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:14.579 00:07:14.579 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:14.579 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:14.579 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:14.579 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:14.579 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:14.579 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:14.579 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:15.148 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:15.148 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.148 18:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 [2024-12-08 18:24:33.053333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
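The records above complete the first dd_rw pass (bs=4096, qd=1): a random 61440-byte payload is written through the Nvme0n1 bdev, read back, compared, and the bdev is then blanked before the next pass. A condensed sketch of one such round follows; gen_bytes is the harness helper named in the trace, the redirection into dd.dump0 is an assumption about how the payload file is produced, and dd_bdev.json is the config file sketched above.

# One dd_rw round in the order the log records it (sketch, not basic_rw.sh itself):
bs=4096; qd=1; count=15                      # 15 * 4096 = 61440 bytes
gen_bytes $((bs * count)) > dd.dump0         # random payload (redirection assumed)

# write the payload through the bdev, then read the same range back
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json dd_bdev.json
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json dd_bdev.json

diff -q dd.dump0 dd.dump1                    # the round trip must be byte-identical

# clear_nvme: overwrite the first 1 MiB with zeros before the next pass
spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json dd_bdev.json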
00:07:15.148 [2024-12-08 18:24:33.053450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71798 ] 00:07:15.148 { 00:07:15.148 "subsystems": [ 00:07:15.148 { 00:07:15.148 "subsystem": "bdev", 00:07:15.148 "config": [ 00:07:15.148 { 00:07:15.148 "params": { 00:07:15.148 "trtype": "pcie", 00:07:15.148 "traddr": "0000:00:10.0", 00:07:15.148 "name": "Nvme0" 00:07:15.148 }, 00:07:15.148 "method": "bdev_nvme_attach_controller" 00:07:15.148 }, 00:07:15.148 { 00:07:15.148 "method": "bdev_wait_for_examine" 00:07:15.148 } 00:07:15.148 ] 00:07:15.148 } 00:07:15.148 ] 00:07:15.148 } 00:07:15.407 [2024-12-08 18:24:33.187296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.407 [2024-12-08 18:24:33.241854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.407 [2024-12-08 18:24:33.293650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.667  [2024-12-08T18:24:33.597Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:15.667 00:07:15.667 18:24:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:15.667 18:24:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:15.667 18:24:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.667 18:24:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.927 [2024-12-08 18:24:33.629721] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:15.927 [2024-12-08 18:24:33.629812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71812 ] 00:07:15.927 { 00:07:15.927 "subsystems": [ 00:07:15.927 { 00:07:15.927 "subsystem": "bdev", 00:07:15.927 "config": [ 00:07:15.927 { 00:07:15.927 "params": { 00:07:15.927 "trtype": "pcie", 00:07:15.927 "traddr": "0000:00:10.0", 00:07:15.927 "name": "Nvme0" 00:07:15.927 }, 00:07:15.927 "method": "bdev_nvme_attach_controller" 00:07:15.927 }, 00:07:15.927 { 00:07:15.927 "method": "bdev_wait_for_examine" 00:07:15.927 } 00:07:15.927 ] 00:07:15.927 } 00:07:15.927 ] 00:07:15.927 } 00:07:15.927 [2024-12-08 18:24:33.762415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.927 [2024-12-08 18:24:33.827011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.186 [2024-12-08 18:24:33.878419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.186  [2024-12-08T18:24:34.376Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:16.446 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.446 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.446 [2024-12-08 18:24:34.228414] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:16.446 [2024-12-08 18:24:34.228533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71828 ] 00:07:16.446 { 00:07:16.446 "subsystems": [ 00:07:16.446 { 00:07:16.446 "subsystem": "bdev", 00:07:16.446 "config": [ 00:07:16.446 { 00:07:16.446 "params": { 00:07:16.446 "trtype": "pcie", 00:07:16.446 "traddr": "0000:00:10.0", 00:07:16.446 "name": "Nvme0" 00:07:16.446 }, 00:07:16.446 "method": "bdev_nvme_attach_controller" 00:07:16.446 }, 00:07:16.446 { 00:07:16.446 "method": "bdev_wait_for_examine" 00:07:16.446 } 00:07:16.446 ] 00:07:16.446 } 00:07:16.446 ] 00:07:16.446 } 00:07:16.446 [2024-12-08 18:24:34.364805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.705 [2024-12-08 18:24:34.422290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.705 [2024-12-08 18:24:34.473102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.705  [2024-12-08T18:24:34.895Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:16.965 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:16.965 18:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.533 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:17.533 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:17.533 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.533 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.533 [2024-12-08 18:24:35.331538] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
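From this point the trace repeats the same pass for larger block sizes: basic_rw.sh builds its block-size list by left-shifting the native 4096-byte size (the bss+=((native_bs << bs)) steps early in the dd_rw trace) and runs each size at queue depths 1 and 64. The count values in the log (15, 7, 3) are consistent with dividing the same 61440-byte budget by the block size; that formula is an inference from the logged numbers, not a quote of basic_rw.sh.

# The (block size, queue depth) sweep behind the repeated passes in this log.
native_bs=4096
qds=(1 64)
bss=()
for s in {0..2}; do
    bss+=( $((native_bs << s)) )             # 4096, 8192, 16384
done

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))                # 15, 7, 3 (sizes 61440, 57344, 49152)
        echo "pass: bs=$bs qd=$qd count=$count size=$((bs * count))"
    done
done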
00:07:17.533 [2024-12-08 18:24:35.331659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71852 ] 00:07:17.533 { 00:07:17.533 "subsystems": [ 00:07:17.533 { 00:07:17.533 "subsystem": "bdev", 00:07:17.533 "config": [ 00:07:17.533 { 00:07:17.533 "params": { 00:07:17.533 "trtype": "pcie", 00:07:17.533 "traddr": "0000:00:10.0", 00:07:17.533 "name": "Nvme0" 00:07:17.533 }, 00:07:17.533 "method": "bdev_nvme_attach_controller" 00:07:17.533 }, 00:07:17.533 { 00:07:17.533 "method": "bdev_wait_for_examine" 00:07:17.533 } 00:07:17.533 ] 00:07:17.533 } 00:07:17.533 ] 00:07:17.533 } 00:07:17.791 [2024-12-08 18:24:35.468038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.791 [2024-12-08 18:24:35.529943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.791 [2024-12-08 18:24:35.582413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.791  [2024-12-08T18:24:35.980Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:18.050 00:07:18.050 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:18.050 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:18.050 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.050 18:24:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.050 [2024-12-08 18:24:35.926524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:18.050 [2024-12-08 18:24:35.926631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71864 ] 00:07:18.050 { 00:07:18.050 "subsystems": [ 00:07:18.050 { 00:07:18.050 "subsystem": "bdev", 00:07:18.050 "config": [ 00:07:18.050 { 00:07:18.050 "params": { 00:07:18.050 "trtype": "pcie", 00:07:18.050 "traddr": "0000:00:10.0", 00:07:18.050 "name": "Nvme0" 00:07:18.050 }, 00:07:18.050 "method": "bdev_nvme_attach_controller" 00:07:18.050 }, 00:07:18.050 { 00:07:18.050 "method": "bdev_wait_for_examine" 00:07:18.050 } 00:07:18.050 ] 00:07:18.050 } 00:07:18.050 ] 00:07:18.050 } 00:07:18.309 [2024-12-08 18:24:36.063678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.309 [2024-12-08 18:24:36.137584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.309 [2024-12-08 18:24:36.190975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.568  [2024-12-08T18:24:36.758Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:18.828 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.828 18:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.828 [2024-12-08 18:24:36.558306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:18.828 [2024-12-08 18:24:36.558444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71881 ] 00:07:18.828 { 00:07:18.828 "subsystems": [ 00:07:18.828 { 00:07:18.828 "subsystem": "bdev", 00:07:18.828 "config": [ 00:07:18.828 { 00:07:18.828 "params": { 00:07:18.828 "trtype": "pcie", 00:07:18.828 "traddr": "0000:00:10.0", 00:07:18.828 "name": "Nvme0" 00:07:18.828 }, 00:07:18.828 "method": "bdev_nvme_attach_controller" 00:07:18.828 }, 00:07:18.828 { 00:07:18.828 "method": "bdev_wait_for_examine" 00:07:18.828 } 00:07:18.828 ] 00:07:18.828 } 00:07:18.828 ] 00:07:18.828 } 00:07:18.828 [2024-12-08 18:24:36.686483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.828 [2024-12-08 18:24:36.746491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.144 [2024-12-08 18:24:36.798636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.144  [2024-12-08T18:24:37.367Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:19.437 00:07:19.437 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:19.437 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:19.437 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:19.437 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:19.437 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:19.437 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:19.437 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.697 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:19.697 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:19.697 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.697 18:24:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.957 [2024-12-08 18:24:37.653505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
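Each pass ends with a progress line such as "Copying: 56/56 [kB] (average 27 MBps)", and those averages are the quickest signal when comparing this run against another. Pulling them out of a saved copy of the log can be done with a one-liner like the following; build.log is a hypothetical name for the captured output.

# List the per-pass average throughput figures reported in this log,
# one line per completed spdk_dd pass (e.g. "average 19 MBps").
grep -oE 'average [0-9]+ [kM]Bps' build.log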
00:07:19.957 [2024-12-08 18:24:37.653603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71900 ] 00:07:19.957 { 00:07:19.957 "subsystems": [ 00:07:19.957 { 00:07:19.957 "subsystem": "bdev", 00:07:19.957 "config": [ 00:07:19.957 { 00:07:19.957 "params": { 00:07:19.957 "trtype": "pcie", 00:07:19.957 "traddr": "0000:00:10.0", 00:07:19.957 "name": "Nvme0" 00:07:19.957 }, 00:07:19.957 "method": "bdev_nvme_attach_controller" 00:07:19.957 }, 00:07:19.957 { 00:07:19.957 "method": "bdev_wait_for_examine" 00:07:19.957 } 00:07:19.957 ] 00:07:19.957 } 00:07:19.957 ] 00:07:19.957 } 00:07:19.957 [2024-12-08 18:24:37.790954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.957 [2024-12-08 18:24:37.851904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.216 [2024-12-08 18:24:37.904724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.216  [2024-12-08T18:24:38.406Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:20.476 00:07:20.476 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:20.476 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:20.476 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.476 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.476 [2024-12-08 18:24:38.266118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:20.476 [2024-12-08 18:24:38.266218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71913 ] 00:07:20.476 { 00:07:20.476 "subsystems": [ 00:07:20.476 { 00:07:20.476 "subsystem": "bdev", 00:07:20.476 "config": [ 00:07:20.476 { 00:07:20.476 "params": { 00:07:20.476 "trtype": "pcie", 00:07:20.476 "traddr": "0000:00:10.0", 00:07:20.476 "name": "Nvme0" 00:07:20.476 }, 00:07:20.476 "method": "bdev_nvme_attach_controller" 00:07:20.476 }, 00:07:20.476 { 00:07:20.476 "method": "bdev_wait_for_examine" 00:07:20.476 } 00:07:20.476 ] 00:07:20.476 } 00:07:20.476 ] 00:07:20.476 } 00:07:20.476 [2024-12-08 18:24:38.404090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.737 [2024-12-08 18:24:38.475019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.737 [2024-12-08 18:24:38.530550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.737  [2024-12-08T18:24:38.926Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:20.996 00:07:20.996 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.996 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:20.996 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:20.996 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:20.996 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:20.997 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:20.997 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:20.997 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:20.997 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:20.997 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.997 18:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.997 [2024-12-08 18:24:38.868656] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:20.997 [2024-12-08 18:24:38.868749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71929 ] 00:07:20.997 { 00:07:20.997 "subsystems": [ 00:07:20.997 { 00:07:20.997 "subsystem": "bdev", 00:07:20.997 "config": [ 00:07:20.997 { 00:07:20.997 "params": { 00:07:20.997 "trtype": "pcie", 00:07:20.997 "traddr": "0000:00:10.0", 00:07:20.997 "name": "Nvme0" 00:07:20.997 }, 00:07:20.997 "method": "bdev_nvme_attach_controller" 00:07:20.997 }, 00:07:20.997 { 00:07:20.997 "method": "bdev_wait_for_examine" 00:07:20.997 } 00:07:20.997 ] 00:07:20.997 } 00:07:20.997 ] 00:07:20.997 } 00:07:21.255 [2024-12-08 18:24:39.004385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.255 [2024-12-08 18:24:39.077212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.255 [2024-12-08 18:24:39.131683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.514  [2024-12-08T18:24:39.444Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:21.514 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:21.514 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.081 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:22.081 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:22.081 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.081 18:24:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.081 [2024-12-08 18:24:39.946087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:22.081 [2024-12-08 18:24:39.946194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71948 ] 00:07:22.081 { 00:07:22.081 "subsystems": [ 00:07:22.081 { 00:07:22.081 "subsystem": "bdev", 00:07:22.081 "config": [ 00:07:22.081 { 00:07:22.081 "params": { 00:07:22.081 "trtype": "pcie", 00:07:22.081 "traddr": "0000:00:10.0", 00:07:22.081 "name": "Nvme0" 00:07:22.081 }, 00:07:22.081 "method": "bdev_nvme_attach_controller" 00:07:22.081 }, 00:07:22.081 { 00:07:22.081 "method": "bdev_wait_for_examine" 00:07:22.081 } 00:07:22.081 ] 00:07:22.081 } 00:07:22.081 ] 00:07:22.081 } 00:07:22.340 [2024-12-08 18:24:40.083405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.340 [2024-12-08 18:24:40.159966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.340 [2024-12-08 18:24:40.212152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.600  [2024-12-08T18:24:40.530Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:22.600 00:07:22.600 18:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:22.600 18:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:22.600 18:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.600 18:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.879 { 00:07:22.879 "subsystems": [ 00:07:22.879 { 00:07:22.879 "subsystem": "bdev", 00:07:22.879 "config": [ 00:07:22.879 { 00:07:22.879 "params": { 00:07:22.879 "trtype": "pcie", 00:07:22.879 "traddr": "0000:00:10.0", 00:07:22.879 "name": "Nvme0" 00:07:22.879 }, 00:07:22.879 "method": "bdev_nvme_attach_controller" 00:07:22.879 }, 00:07:22.879 { 00:07:22.879 "method": "bdev_wait_for_examine" 00:07:22.879 } 00:07:22.879 ] 00:07:22.879 } 00:07:22.879 ] 00:07:22.879 } 00:07:22.879 [2024-12-08 18:24:40.552186] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:22.879 [2024-12-08 18:24:40.552292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71967 ] 00:07:22.879 [2024-12-08 18:24:40.688246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.879 [2024-12-08 18:24:40.753318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.879 [2024-12-08 18:24:40.806550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.137  [2024-12-08T18:24:41.327Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:23.397 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:23.397 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.397 [2024-12-08 18:24:41.153046] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:23.397 [2024-12-08 18:24:41.153140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71981 ] 00:07:23.397 { 00:07:23.397 "subsystems": [ 00:07:23.397 { 00:07:23.397 "subsystem": "bdev", 00:07:23.397 "config": [ 00:07:23.397 { 00:07:23.397 "params": { 00:07:23.397 "trtype": "pcie", 00:07:23.397 "traddr": "0000:00:10.0", 00:07:23.397 "name": "Nvme0" 00:07:23.397 }, 00:07:23.397 "method": "bdev_nvme_attach_controller" 00:07:23.397 }, 00:07:23.397 { 00:07:23.397 "method": "bdev_wait_for_examine" 00:07:23.397 } 00:07:23.397 ] 00:07:23.397 } 00:07:23.397 ] 00:07:23.397 } 00:07:23.397 [2024-12-08 18:24:41.282666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.655 [2024-12-08 18:24:41.358692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.655 [2024-12-08 18:24:41.413995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.655  [2024-12-08T18:24:41.844Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:23.914 00:07:23.914 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:23.914 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:23.914 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:23.914 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:23.914 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:23.914 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:23.914 18:24:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.479 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:24.479 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:24.479 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.479 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.479 [2024-12-08 18:24:42.222738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:24.479 [2024-12-08 18:24:42.222846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72001 ] 00:07:24.479 { 00:07:24.479 "subsystems": [ 00:07:24.479 { 00:07:24.479 "subsystem": "bdev", 00:07:24.479 "config": [ 00:07:24.479 { 00:07:24.479 "params": { 00:07:24.479 "trtype": "pcie", 00:07:24.479 "traddr": "0000:00:10.0", 00:07:24.479 "name": "Nvme0" 00:07:24.479 }, 00:07:24.479 "method": "bdev_nvme_attach_controller" 00:07:24.479 }, 00:07:24.479 { 00:07:24.479 "method": "bdev_wait_for_examine" 00:07:24.479 } 00:07:24.479 ] 00:07:24.479 } 00:07:24.479 ] 00:07:24.479 } 00:07:24.479 [2024-12-08 18:24:42.362719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.737 [2024-12-08 18:24:42.441118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.737 [2024-12-08 18:24:42.502909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.737  [2024-12-08T18:24:42.925Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:24.995 00:07:24.995 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:24.995 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:24.995 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.995 18:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.995 [2024-12-08 18:24:42.859681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:24.995 [2024-12-08 18:24:42.859931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72015 ] 00:07:24.995 { 00:07:24.995 "subsystems": [ 00:07:24.995 { 00:07:24.995 "subsystem": "bdev", 00:07:24.995 "config": [ 00:07:24.995 { 00:07:24.995 "params": { 00:07:24.995 "trtype": "pcie", 00:07:24.995 "traddr": "0000:00:10.0", 00:07:24.995 "name": "Nvme0" 00:07:24.995 }, 00:07:24.995 "method": "bdev_nvme_attach_controller" 00:07:24.995 }, 00:07:24.995 { 00:07:24.995 "method": "bdev_wait_for_examine" 00:07:24.995 } 00:07:24.995 ] 00:07:24.995 } 00:07:24.995 ] 00:07:24.995 } 00:07:25.252 [2024-12-08 18:24:42.995412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.252 [2024-12-08 18:24:43.068641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.252 [2024-12-08 18:24:43.126170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.510  [2024-12-08T18:24:43.441Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:25.511 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:25.511 18:24:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.769 [2024-12-08 18:24:43.467841] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:25.769 [2024-12-08 18:24:43.467944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72036 ] 00:07:25.769 { 00:07:25.769 "subsystems": [ 00:07:25.769 { 00:07:25.769 "subsystem": "bdev", 00:07:25.769 "config": [ 00:07:25.769 { 00:07:25.769 "params": { 00:07:25.770 "trtype": "pcie", 00:07:25.770 "traddr": "0000:00:10.0", 00:07:25.770 "name": "Nvme0" 00:07:25.770 }, 00:07:25.770 "method": "bdev_nvme_attach_controller" 00:07:25.770 }, 00:07:25.770 { 00:07:25.770 "method": "bdev_wait_for_examine" 00:07:25.770 } 00:07:25.770 ] 00:07:25.770 } 00:07:25.770 ] 00:07:25.770 } 00:07:25.770 [2024-12-08 18:24:43.598943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.770 [2024-12-08 18:24:43.656377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.031 [2024-12-08 18:24:43.706680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.031  [2024-12-08T18:24:44.221Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:26.291 00:07:26.291 00:07:26.291 real 0m13.966s 00:07:26.291 user 0m9.978s 00:07:26.291 sys 0m5.385s 00:07:26.291 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.291 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.291 ************************************ 00:07:26.291 END TEST dd_rw 00:07:26.291 ************************************ 00:07:26.291 18:24:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:26.291 18:24:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.291 18:24:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.291 18:24:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.291 ************************************ 00:07:26.291 START TEST dd_rw_offset 00:07:26.291 ************************************ 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=2kyqaz3vih8efuqx0r8cixwypdmnryw45tqjymiqst9hpzxibecnhtyzwkxl3kdazjnew387v7eg86oj5zq71clcau641cd7vcxkoxxhmm2ry5wyoqsgo4o97ujtsgyz12ovhof1vhl1gt04u7afil2xofcqvi2c6rnwjxrvxjxqm9fz7mtzwdlxzvdoyrq5miqa3akjxvdlqbmhvf9ih7b7ci5uw194u8kgw7c9dwt0hn0zov0mmiqs9s4vqn4t4veh07sg1tispml8u229y564ijyyimraah3nfjmsm5eextgf778l4xdq4mjufvmzp16rly5fm0sg3svjcojh1glqk8kni1ivdju5ujkw1u2m5cqsv5glr3g0ekoxgi1eki8b3b2jp69v4v9dh4hcwid8d0a1iqt4e71g8p82apkytcxfm8ck2jexwgovo7zx5q7rj22tfy329u2591aud19tg5yc3zs28hzvw3lpmcnkw5pj1pf88474zz8utfj8xwiog367rpvo9zsi6rr7mpbs44z3ono7nth8xtm73pfawstcb8ynf97f6rwll79upr7ku30m8stjq324le51jc2oysqpndns5hvq0lnlnd7j9kv3w4yqypuevb1yqi4jjjae6axwghsxspjeks74p14xy6i7ifacysc96ti70uh52iwcqdskdxbfomnf90ehvfzwp818fi95h9vufbr9z564chfvcwpls7o0swf4filci3vwfan8yz86ne1bg7pscb0bzsw308g9uwurb952h7qzjh1tfyb60xcf2w75ik4j532l309iwycoh7wh4pi8qy7d8k86ei2b8iw42auwstdxtjqs7rnu9m8xpto83kb2scs10rqtrzqadv2k9qqfgi74mdgp4theagahclu3fvrfljqdshpcbtsdqj76tpk5wtcm6p1vqbs0d13vf7rjxe3gv3qki6e3p0z8o0ug1v8jo1ywn976914muzjr1eh7o2q8cgws3wtkh5gcafka1x2a0wta40zq9ka4kxliz6l90wkw2intqnb597ypo961mm1fzd4kwu45m5j8e8b27bv8m859y5tcafi9840htitrxdck2ic6dg275nk9mblexzjs3rl1aup3sk4zdjpjxtlq30ydzq8tmd4dey5vej8detu483pp0hgjkfc5m1ffeyn4ncswhc4eq4etpntlkjvmkopaeude7toogky4e5m5kkzgrl7g667tekaj4z3z5ydd3tm86amblt8jnvfwznyeh1py1l3mnef3j1ajowycsqrn7pkysz7g9divydemgv4ibokz9ghau8crwescl7g9ys1qm5gyz86r7liri0u8iuptjt9d9g5vrhisv1iklnsefw6ohv59p5zznmj6ufo17y0987ep7xjy00lvdvc86clb4qr3fk13sb5bmskryxadx4zjahc1v9rdl3gxx8jkyfuynv0zn09qiaxyjtsy304kho9ve36b32l3j9d9tjlukux41xs77wmnu41uc69cib48164i71jedq5zxk1uq9n66oqd8zt3bx2tag8flk7etx632l0zryogpghxhjxk2woij0phh0spa8z4io9t291jcsr8e7l5j0lzr27yy2cn160ohf5nj8z1sqb3ak8pgr3o2xo592wpxzei1wlxuf2w7owwqpvxot1gi4csavxfvh6f0zr8w7kn88j0kgv934q8iy7o6mkiju2ek4pv8swfu9b8wumca79gcr9rw99ttfeunoj16ys27wh0s0z4des4s09z2pdpisn0krf6ggnxjvy4s75wgo27k7fal5isw00cb2nccf1g7ee918dn3xmyebix51t0gfn5plno5yhhktjxebi75jeqeq7jixf8f3p6xicm8wr89olli7usu0naagiq9x5hqq7iwtr5hc90vbcwsu178s39vn2trot5pbbt7z2qeu37djdke38v7idh27cssq3qs2bjzvefb9ewtch9bzrtdk9tlv0ftv8rnw6m8rkzwei30da1lsj24209lyu3uuv7m188izu3jk2scc6iwpv4q1hyyyi1qnd2z1rdevam5zn6ifxftz0z3j0ekamv32x0nqtsqffcvwdol7wlcyqawl82ldplm7mkfbj02y2dxxzjpm7t9xwcn2l61hnr86q6vfi4469uclqe3xsq9dzvq74gcu0ki18w0azc0cjp2rozjl7qeacez1iymt4y0z4crssfgckhvjgr1ca79vuxp58i94rf0jhl0paaxozdm0o4088ddg2kjlnk1o84kk3597egd9n5kl7862lx2co0suviplf240z32dcmswwgdrwx1pwa9toi1cgxfqrytkaoajdohb9zb060ieq54f673321lqvduuk1v6slrrqqizjh4u3vywrrdvbvfwz1fj8rvoayy18kni8ads8gkee6vvq35i4i2y9h27f736m3r135i7dmh90qpar5789hjxw3n3c7u9srnh1mvx68gwti280u9ydh3ydmlhouher2tbfsuwx064awz95avwjna1q6ic08l16ar9lqtebbvxdmzzj6ie8enxzzec2zvhf9gfwguhny29ouepy9vjvkomom3do6ro8gvlkx6lniqvzskqmpc81tdvbjxpxbp1onneaqfy2j7w2tkv5vudkjziu6yqgw7bk8lpnsftca1fvju162s4pockcevhq3qwzhtpcbzwcaqdteh5a4ulaja1ctmb62yz45n1w06vh6bgsjvp6o768qc7akkh4k69blf2hjibi865ujdgeotc3pv8q00d9caaobxozfzlzvv5t56uxm7ico23wa6pssopo540dg4yf8f3acowamnizcpojicyrx1eckn7k1pcm4w38lm6t76t2gyar7mzpymm1muevxpbffmcp839m00hg6lzdj50jb9nc6jen4trfnuh7xzvl0bor94hjmib4afehogn9sv7qmwfb8zd5lksb64hucr0zj01c8947d4zj5fovvd69idgydzs06jgo4gvs39d9jcr9e3lund1jtxr3z45jzbfvulewpdwlm90huxjb15m55jitgfzmox83cv70pq8mb07dfslx9lg6a83falxwwnc0spz204vfoq3y6msfc05rahphsh1rbx6l3665d50hbxr5b90fim0j48n5omv3yyz7oqjqmn7ute9iscju98sa95f1jhtzoas9n030zqzfejqwxzg47fpbyv6ni8f3svl37ltuwexlcnebukdotxelb570jlwgrpk2djz2mu484rqqix1t2b81oyg5l11ixymkv760ti28f22d97og7i3o60nrpfg3qumdkldsp7d3coltwjtnz57w8spmikmyiueqmw47idmt3sizmirg9zwfyepwbojc70befzd0rzjy4540ruwh3g5nhhb51mlo1cuenfgowiiisjp40xitrtmlg2crhdlenrndw92k4nf2b9tp6sfya35b5ba816sf6rd53ugo7a2p9ad
x0uxz7gwenf02tlhaqrug5vde49byjldx6nxvweeq3twkmo70ux9c1gf32izlkhk4dpsyunb6akfu7l53t36cqijy75fy9nka5w0di08udlo5slqn7ghg9z69jrw2ca5inmv4boy0zv4c873fz73fbg2gn7nlr9fu0qxxyjcftcl5lux3c2icst9fcrubjqk2n3wp2avjfkn2egk4l5utp3nm0jhgf9lw8wo6afezlnidm6vz3dmviokzaxw1l8t4j4enbbftrt748mgrmzzwwyb5aai1ilrhgr4ywpimqtc4msybx0r1b1r0tcu8phiec6ia1j1220sowlluejs7obbckhalkvoxyoean24v3vla6rv41yrhqj57602v5z87fb27xx52djnrmqx9rbopk2jwdiqi1wautqpycob7am7ltegwozkzdkyds6upjgi7aiiw0l32t3vqz5d4vjeabjpi9srcv40p5t52l0wcyx006v11gfz7xattvwqr07bb87egsfys6dy7r0zd8ypuv5dzbfr7rb0mk 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:26.292 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:26.292 [2024-12-08 18:24:44.170294] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:26.292 [2024-12-08 18:24:44.170392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72061 ] 00:07:26.292 { 00:07:26.292 "subsystems": [ 00:07:26.292 { 00:07:26.292 "subsystem": "bdev", 00:07:26.292 "config": [ 00:07:26.292 { 00:07:26.292 "params": { 00:07:26.292 "trtype": "pcie", 00:07:26.292 "traddr": "0000:00:10.0", 00:07:26.292 "name": "Nvme0" 00:07:26.292 }, 00:07:26.292 "method": "bdev_nvme_attach_controller" 00:07:26.292 }, 00:07:26.292 { 00:07:26.292 "method": "bdev_wait_for_examine" 00:07:26.292 } 00:07:26.292 ] 00:07:26.292 } 00:07:26.292 ] 00:07:26.292 } 00:07:26.551 [2024-12-08 18:24:44.307548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.551 [2024-12-08 18:24:44.360091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.551 [2024-12-08 18:24:44.410781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.810  [2024-12-08T18:24:44.740Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:26.810 00:07:26.810 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:26.810 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:26.810 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:26.810 18:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:27.069 [2024-12-08 18:24:44.753473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:27.069 [2024-12-08 18:24:44.753570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72080 ] 00:07:27.069 { 00:07:27.069 "subsystems": [ 00:07:27.069 { 00:07:27.069 "subsystem": "bdev", 00:07:27.069 "config": [ 00:07:27.069 { 00:07:27.069 "params": { 00:07:27.069 "trtype": "pcie", 00:07:27.069 "traddr": "0000:00:10.0", 00:07:27.069 "name": "Nvme0" 00:07:27.069 }, 00:07:27.069 "method": "bdev_nvme_attach_controller" 00:07:27.069 }, 00:07:27.069 { 00:07:27.069 "method": "bdev_wait_for_examine" 00:07:27.069 } 00:07:27.069 ] 00:07:27.069 } 00:07:27.069 ] 00:07:27.069 } 00:07:27.069 [2024-12-08 18:24:44.884792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.069 [2024-12-08 18:24:44.937067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.069 [2024-12-08 18:24:44.986161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.327  [2024-12-08T18:24:45.516Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:27.586 00:07:27.586 18:24:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:27.586 ************************************ 00:07:27.586 END TEST dd_rw_offset 00:07:27.586 ************************************ 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 2kyqaz3vih8efuqx0r8cixwypdmnryw45tqjymiqst9hpzxibecnhtyzwkxl3kdazjnew387v7eg86oj5zq71clcau641cd7vcxkoxxhmm2ry5wyoqsgo4o97ujtsgyz12ovhof1vhl1gt04u7afil2xofcqvi2c6rnwjxrvxjxqm9fz7mtzwdlxzvdoyrq5miqa3akjxvdlqbmhvf9ih7b7ci5uw194u8kgw7c9dwt0hn0zov0mmiqs9s4vqn4t4veh07sg1tispml8u229y564ijyyimraah3nfjmsm5eextgf778l4xdq4mjufvmzp16rly5fm0sg3svjcojh1glqk8kni1ivdju5ujkw1u2m5cqsv5glr3g0ekoxgi1eki8b3b2jp69v4v9dh4hcwid8d0a1iqt4e71g8p82apkytcxfm8ck2jexwgovo7zx5q7rj22tfy329u2591aud19tg5yc3zs28hzvw3lpmcnkw5pj1pf88474zz8utfj8xwiog367rpvo9zsi6rr7mpbs44z3ono7nth8xtm73pfawstcb8ynf97f6rwll79upr7ku30m8stjq324le51jc2oysqpndns5hvq0lnlnd7j9kv3w4yqypuevb1yqi4jjjae6axwghsxspjeks74p14xy6i7ifacysc96ti70uh52iwcqdskdxbfomnf90ehvfzwp818fi95h9vufbr9z564chfvcwpls7o0swf4filci3vwfan8yz86ne1bg7pscb0bzsw308g9uwurb952h7qzjh1tfyb60xcf2w75ik4j532l309iwycoh7wh4pi8qy7d8k86ei2b8iw42auwstdxtjqs7rnu9m8xpto83kb2scs10rqtrzqadv2k9qqfgi74mdgp4theagahclu3fvrfljqdshpcbtsdqj76tpk5wtcm6p1vqbs0d13vf7rjxe3gv3qki6e3p0z8o0ug1v8jo1ywn976914muzjr1eh7o2q8cgws3wtkh5gcafka1x2a0wta40zq9ka4kxliz6l90wkw2intqnb597ypo961mm1fzd4kwu45m5j8e8b27bv8m859y5tcafi9840htitrxdck2ic6dg275nk9mblexzjs3rl1aup3sk4zdjpjxtlq30ydzq8tmd4dey5vej8detu483pp0hgjkfc5m1ffeyn4ncswhc4eq4etpntlkjvmkopaeude7toogky4e5m5kkzgrl7g667tekaj4z3z5ydd3tm86amblt8jnvfwznyeh1py1l3mnef3j1ajowycsqrn7pkysz7g9divydemgv4ibokz9ghau8crwescl7g9ys1qm5gyz86r7liri0u8iuptjt9d9g5vrhisv1iklnsefw6ohv59p5zznmj6ufo17y0987ep7xjy00lvdvc86clb4qr3fk13sb5bmskryxadx4zjahc1v9rdl3gxx8jkyfuynv0zn09qiaxyjtsy304kho9ve36b32l3j9d9tjlukux41xs77wmnu41uc69cib48164i71jedq5zxk1uq9n66oqd8zt3bx2tag8flk7etx632l0zryogpghxhjxk2woij0phh0spa8z4io9t291jcsr8e7l5j0lzr27yy2cn160ohf5nj8z1sqb3ak8pgr3o2xo592wpxzei1wlxuf2w7owwqpvxot1gi4csavxfvh6f0zr8w7kn88j0kgv934q8iy7o6mkiju2ek4pv8swfu9b8wumca79gcr9rw99ttfeunoj16ys27wh0s0z4des4s09z2pdpisn0krf6ggnxjvy4s75wgo27k7fal5isw00cb2nccf1g7ee918dn3xmyebix51t0gfn5plno5yhhktjxebi75jeqeq7jixf8f3p6xicm8wr89olli7usu0naagiq9x5hqq7iwtr5hc90vbcwsu178s39vn2trot5pbbt7z2qeu37djdke38v7idh27
cssq3qs2bjzvefb9ewtch9bzrtdk9tlv0ftv8rnw6m8rkzwei30da1lsj24209lyu3uuv7m188izu3jk2scc6iwpv4q1hyyyi1qnd2z1rdevam5zn6ifxftz0z3j0ekamv32x0nqtsqffcvwdol7wlcyqawl82ldplm7mkfbj02y2dxxzjpm7t9xwcn2l61hnr86q6vfi4469uclqe3xsq9dzvq74gcu0ki18w0azc0cjp2rozjl7qeacez1iymt4y0z4crssfgckhvjgr1ca79vuxp58i94rf0jhl0paaxozdm0o4088ddg2kjlnk1o84kk3597egd9n5kl7862lx2co0suviplf240z32dcmswwgdrwx1pwa9toi1cgxfqrytkaoajdohb9zb060ieq54f673321lqvduuk1v6slrrqqizjh4u3vywrrdvbvfwz1fj8rvoayy18kni8ads8gkee6vvq35i4i2y9h27f736m3r135i7dmh90qpar5789hjxw3n3c7u9srnh1mvx68gwti280u9ydh3ydmlhouher2tbfsuwx064awz95avwjna1q6ic08l16ar9lqtebbvxdmzzj6ie8enxzzec2zvhf9gfwguhny29ouepy9vjvkomom3do6ro8gvlkx6lniqvzskqmpc81tdvbjxpxbp1onneaqfy2j7w2tkv5vudkjziu6yqgw7bk8lpnsftca1fvju162s4pockcevhq3qwzhtpcbzwcaqdteh5a4ulaja1ctmb62yz45n1w06vh6bgsjvp6o768qc7akkh4k69blf2hjibi865ujdgeotc3pv8q00d9caaobxozfzlzvv5t56uxm7ico23wa6pssopo540dg4yf8f3acowamnizcpojicyrx1eckn7k1pcm4w38lm6t76t2gyar7mzpymm1muevxpbffmcp839m00hg6lzdj50jb9nc6jen4trfnuh7xzvl0bor94hjmib4afehogn9sv7qmwfb8zd5lksb64hucr0zj01c8947d4zj5fovvd69idgydzs06jgo4gvs39d9jcr9e3lund1jtxr3z45jzbfvulewpdwlm90huxjb15m55jitgfzmox83cv70pq8mb07dfslx9lg6a83falxwwnc0spz204vfoq3y6msfc05rahphsh1rbx6l3665d50hbxr5b90fim0j48n5omv3yyz7oqjqmn7ute9iscju98sa95f1jhtzoas9n030zqzfejqwxzg47fpbyv6ni8f3svl37ltuwexlcnebukdotxelb570jlwgrpk2djz2mu484rqqix1t2b81oyg5l11ixymkv760ti28f22d97og7i3o60nrpfg3qumdkldsp7d3coltwjtnz57w8spmikmyiueqmw47idmt3sizmirg9zwfyepwbojc70befzd0rzjy4540ruwh3g5nhhb51mlo1cuenfgowiiisjp40xitrtmlg2crhdlenrndw92k4nf2b9tp6sfya35b5ba816sf6rd53ugo7a2p9adx0uxz7gwenf02tlhaqrug5vde49byjldx6nxvweeq3twkmo70ux9c1gf32izlkhk4dpsyunb6akfu7l53t36cqijy75fy9nka5w0di08udlo5slqn7ghg9z69jrw2ca5inmv4boy0zv4c873fz73fbg2gn7nlr9fu0qxxyjcftcl5lux3c2icst9fcrubjqk2n3wp2avjfkn2egk4l5utp3nm0jhgf9lw8wo6afezlnidm6vz3dmviokzaxw1l8t4j4enbbftrt748mgrmzzwwyb5aai1ilrhgr4ywpimqtc4msybx0r1b1r0tcu8phiec6ia1j1220sowlluejs7obbckhalkvoxyoean24v3vla6rv41yrhqj57602v5z87fb27xx52djnrmqx9rbopk2jwdiqi1wautqpycob7am7ltegwozkzdkyds6upjgi7aiiw0l32t3vqz5d4vjeabjpi9srcv40p5t52l0wcyx006v11gfz7xattvwqr07bb87egsfys6dy7r0zd8ypuv5dzbfr7rb0mk == 
\2\k\y\q\a\z\3\v\i\h\8\e\f\u\q\x\0\r\8\c\i\x\w\y\p\d\m\n\r\y\w\4\5\t\q\j\y\m\i\q\s\t\9\h\p\z\x\i\b\e\c\n\h\t\y\z\w\k\x\l\3\k\d\a\z\j\n\e\w\3\8\7\v\7\e\g\8\6\o\j\5\z\q\7\1\c\l\c\a\u\6\4\1\c\d\7\v\c\x\k\o\x\x\h\m\m\2\r\y\5\w\y\o\q\s\g\o\4\o\9\7\u\j\t\s\g\y\z\1\2\o\v\h\o\f\1\v\h\l\1\g\t\0\4\u\7\a\f\i\l\2\x\o\f\c\q\v\i\2\c\6\r\n\w\j\x\r\v\x\j\x\q\m\9\f\z\7\m\t\z\w\d\l\x\z\v\d\o\y\r\q\5\m\i\q\a\3\a\k\j\x\v\d\l\q\b\m\h\v\f\9\i\h\7\b\7\c\i\5\u\w\1\9\4\u\8\k\g\w\7\c\9\d\w\t\0\h\n\0\z\o\v\0\m\m\i\q\s\9\s\4\v\q\n\4\t\4\v\e\h\0\7\s\g\1\t\i\s\p\m\l\8\u\2\2\9\y\5\6\4\i\j\y\y\i\m\r\a\a\h\3\n\f\j\m\s\m\5\e\e\x\t\g\f\7\7\8\l\4\x\d\q\4\m\j\u\f\v\m\z\p\1\6\r\l\y\5\f\m\0\s\g\3\s\v\j\c\o\j\h\1\g\l\q\k\8\k\n\i\1\i\v\d\j\u\5\u\j\k\w\1\u\2\m\5\c\q\s\v\5\g\l\r\3\g\0\e\k\o\x\g\i\1\e\k\i\8\b\3\b\2\j\p\6\9\v\4\v\9\d\h\4\h\c\w\i\d\8\d\0\a\1\i\q\t\4\e\7\1\g\8\p\8\2\a\p\k\y\t\c\x\f\m\8\c\k\2\j\e\x\w\g\o\v\o\7\z\x\5\q\7\r\j\2\2\t\f\y\3\2\9\u\2\5\9\1\a\u\d\1\9\t\g\5\y\c\3\z\s\2\8\h\z\v\w\3\l\p\m\c\n\k\w\5\p\j\1\p\f\8\8\4\7\4\z\z\8\u\t\f\j\8\x\w\i\o\g\3\6\7\r\p\v\o\9\z\s\i\6\r\r\7\m\p\b\s\4\4\z\3\o\n\o\7\n\t\h\8\x\t\m\7\3\p\f\a\w\s\t\c\b\8\y\n\f\9\7\f\6\r\w\l\l\7\9\u\p\r\7\k\u\3\0\m\8\s\t\j\q\3\2\4\l\e\5\1\j\c\2\o\y\s\q\p\n\d\n\s\5\h\v\q\0\l\n\l\n\d\7\j\9\k\v\3\w\4\y\q\y\p\u\e\v\b\1\y\q\i\4\j\j\j\a\e\6\a\x\w\g\h\s\x\s\p\j\e\k\s\7\4\p\1\4\x\y\6\i\7\i\f\a\c\y\s\c\9\6\t\i\7\0\u\h\5\2\i\w\c\q\d\s\k\d\x\b\f\o\m\n\f\9\0\e\h\v\f\z\w\p\8\1\8\f\i\9\5\h\9\v\u\f\b\r\9\z\5\6\4\c\h\f\v\c\w\p\l\s\7\o\0\s\w\f\4\f\i\l\c\i\3\v\w\f\a\n\8\y\z\8\6\n\e\1\b\g\7\p\s\c\b\0\b\z\s\w\3\0\8\g\9\u\w\u\r\b\9\5\2\h\7\q\z\j\h\1\t\f\y\b\6\0\x\c\f\2\w\7\5\i\k\4\j\5\3\2\l\3\0\9\i\w\y\c\o\h\7\w\h\4\p\i\8\q\y\7\d\8\k\8\6\e\i\2\b\8\i\w\4\2\a\u\w\s\t\d\x\t\j\q\s\7\r\n\u\9\m\8\x\p\t\o\8\3\k\b\2\s\c\s\1\0\r\q\t\r\z\q\a\d\v\2\k\9\q\q\f\g\i\7\4\m\d\g\p\4\t\h\e\a\g\a\h\c\l\u\3\f\v\r\f\l\j\q\d\s\h\p\c\b\t\s\d\q\j\7\6\t\p\k\5\w\t\c\m\6\p\1\v\q\b\s\0\d\1\3\v\f\7\r\j\x\e\3\g\v\3\q\k\i\6\e\3\p\0\z\8\o\0\u\g\1\v\8\j\o\1\y\w\n\9\7\6\9\1\4\m\u\z\j\r\1\e\h\7\o\2\q\8\c\g\w\s\3\w\t\k\h\5\g\c\a\f\k\a\1\x\2\a\0\w\t\a\4\0\z\q\9\k\a\4\k\x\l\i\z\6\l\9\0\w\k\w\2\i\n\t\q\n\b\5\9\7\y\p\o\9\6\1\m\m\1\f\z\d\4\k\w\u\4\5\m\5\j\8\e\8\b\2\7\b\v\8\m\8\5\9\y\5\t\c\a\f\i\9\8\4\0\h\t\i\t\r\x\d\c\k\2\i\c\6\d\g\2\7\5\n\k\9\m\b\l\e\x\z\j\s\3\r\l\1\a\u\p\3\s\k\4\z\d\j\p\j\x\t\l\q\3\0\y\d\z\q\8\t\m\d\4\d\e\y\5\v\e\j\8\d\e\t\u\4\8\3\p\p\0\h\g\j\k\f\c\5\m\1\f\f\e\y\n\4\n\c\s\w\h\c\4\e\q\4\e\t\p\n\t\l\k\j\v\m\k\o\p\a\e\u\d\e\7\t\o\o\g\k\y\4\e\5\m\5\k\k\z\g\r\l\7\g\6\6\7\t\e\k\a\j\4\z\3\z\5\y\d\d\3\t\m\8\6\a\m\b\l\t\8\j\n\v\f\w\z\n\y\e\h\1\p\y\1\l\3\m\n\e\f\3\j\1\a\j\o\w\y\c\s\q\r\n\7\p\k\y\s\z\7\g\9\d\i\v\y\d\e\m\g\v\4\i\b\o\k\z\9\g\h\a\u\8\c\r\w\e\s\c\l\7\g\9\y\s\1\q\m\5\g\y\z\8\6\r\7\l\i\r\i\0\u\8\i\u\p\t\j\t\9\d\9\g\5\v\r\h\i\s\v\1\i\k\l\n\s\e\f\w\6\o\h\v\5\9\p\5\z\z\n\m\j\6\u\f\o\1\7\y\0\9\8\7\e\p\7\x\j\y\0\0\l\v\d\v\c\8\6\c\l\b\4\q\r\3\f\k\1\3\s\b\5\b\m\s\k\r\y\x\a\d\x\4\z\j\a\h\c\1\v\9\r\d\l\3\g\x\x\8\j\k\y\f\u\y\n\v\0\z\n\0\9\q\i\a\x\y\j\t\s\y\3\0\4\k\h\o\9\v\e\3\6\b\3\2\l\3\j\9\d\9\t\j\l\u\k\u\x\4\1\x\s\7\7\w\m\n\u\4\1\u\c\6\9\c\i\b\4\8\1\6\4\i\7\1\j\e\d\q\5\z\x\k\1\u\q\9\n\6\6\o\q\d\8\z\t\3\b\x\2\t\a\g\8\f\l\k\7\e\t\x\6\3\2\l\0\z\r\y\o\g\p\g\h\x\h\j\x\k\2\w\o\i\j\0\p\h\h\0\s\p\a\8\z\4\i\o\9\t\2\9\1\j\c\s\r\8\e\7\l\5\j\0\l\z\r\2\7\y\y\2\c\n\1\6\0\o\h\f\5\n\j\8\z\1\s\q\b\3\a\k\8\p\g\r\3\o\2\x\o\5\9\2\w\p\x\z\e\i\1\w\l\x\u\f\2\w\7\o\w\w\q\p\v\x\o\t\1\g\i\4\c\s\a\v\x\f\v\h\6\f\0\z\r\8\w\7\k\n\8\8\j\0\k\g\v\9\3\4\q\8\i\y\7\o\6\m\k\i\j\u\2\e\k\4\p\v\8\s\w\f\u\9\b\8\w\u\m\c\a\7\9\g\c\r\9\r\w\9\9\t\t\
f\e\u\n\o\j\1\6\y\s\2\7\w\h\0\s\0\z\4\d\e\s\4\s\0\9\z\2\p\d\p\i\s\n\0\k\r\f\6\g\g\n\x\j\v\y\4\s\7\5\w\g\o\2\7\k\7\f\a\l\5\i\s\w\0\0\c\b\2\n\c\c\f\1\g\7\e\e\9\1\8\d\n\3\x\m\y\e\b\i\x\5\1\t\0\g\f\n\5\p\l\n\o\5\y\h\h\k\t\j\x\e\b\i\7\5\j\e\q\e\q\7\j\i\x\f\8\f\3\p\6\x\i\c\m\8\w\r\8\9\o\l\l\i\7\u\s\u\0\n\a\a\g\i\q\9\x\5\h\q\q\7\i\w\t\r\5\h\c\9\0\v\b\c\w\s\u\1\7\8\s\3\9\v\n\2\t\r\o\t\5\p\b\b\t\7\z\2\q\e\u\3\7\d\j\d\k\e\3\8\v\7\i\d\h\2\7\c\s\s\q\3\q\s\2\b\j\z\v\e\f\b\9\e\w\t\c\h\9\b\z\r\t\d\k\9\t\l\v\0\f\t\v\8\r\n\w\6\m\8\r\k\z\w\e\i\3\0\d\a\1\l\s\j\2\4\2\0\9\l\y\u\3\u\u\v\7\m\1\8\8\i\z\u\3\j\k\2\s\c\c\6\i\w\p\v\4\q\1\h\y\y\y\i\1\q\n\d\2\z\1\r\d\e\v\a\m\5\z\n\6\i\f\x\f\t\z\0\z\3\j\0\e\k\a\m\v\3\2\x\0\n\q\t\s\q\f\f\c\v\w\d\o\l\7\w\l\c\y\q\a\w\l\8\2\l\d\p\l\m\7\m\k\f\b\j\0\2\y\2\d\x\x\z\j\p\m\7\t\9\x\w\c\n\2\l\6\1\h\n\r\8\6\q\6\v\f\i\4\4\6\9\u\c\l\q\e\3\x\s\q\9\d\z\v\q\7\4\g\c\u\0\k\i\1\8\w\0\a\z\c\0\c\j\p\2\r\o\z\j\l\7\q\e\a\c\e\z\1\i\y\m\t\4\y\0\z\4\c\r\s\s\f\g\c\k\h\v\j\g\r\1\c\a\7\9\v\u\x\p\5\8\i\9\4\r\f\0\j\h\l\0\p\a\a\x\o\z\d\m\0\o\4\0\8\8\d\d\g\2\k\j\l\n\k\1\o\8\4\k\k\3\5\9\7\e\g\d\9\n\5\k\l\7\8\6\2\l\x\2\c\o\0\s\u\v\i\p\l\f\2\4\0\z\3\2\d\c\m\s\w\w\g\d\r\w\x\1\p\w\a\9\t\o\i\1\c\g\x\f\q\r\y\t\k\a\o\a\j\d\o\h\b\9\z\b\0\6\0\i\e\q\5\4\f\6\7\3\3\2\1\l\q\v\d\u\u\k\1\v\6\s\l\r\r\q\q\i\z\j\h\4\u\3\v\y\w\r\r\d\v\b\v\f\w\z\1\f\j\8\r\v\o\a\y\y\1\8\k\n\i\8\a\d\s\8\g\k\e\e\6\v\v\q\3\5\i\4\i\2\y\9\h\2\7\f\7\3\6\m\3\r\1\3\5\i\7\d\m\h\9\0\q\p\a\r\5\7\8\9\h\j\x\w\3\n\3\c\7\u\9\s\r\n\h\1\m\v\x\6\8\g\w\t\i\2\8\0\u\9\y\d\h\3\y\d\m\l\h\o\u\h\e\r\2\t\b\f\s\u\w\x\0\6\4\a\w\z\9\5\a\v\w\j\n\a\1\q\6\i\c\0\8\l\1\6\a\r\9\l\q\t\e\b\b\v\x\d\m\z\z\j\6\i\e\8\e\n\x\z\z\e\c\2\z\v\h\f\9\g\f\w\g\u\h\n\y\2\9\o\u\e\p\y\9\v\j\v\k\o\m\o\m\3\d\o\6\r\o\8\g\v\l\k\x\6\l\n\i\q\v\z\s\k\q\m\p\c\8\1\t\d\v\b\j\x\p\x\b\p\1\o\n\n\e\a\q\f\y\2\j\7\w\2\t\k\v\5\v\u\d\k\j\z\i\u\6\y\q\g\w\7\b\k\8\l\p\n\s\f\t\c\a\1\f\v\j\u\1\6\2\s\4\p\o\c\k\c\e\v\h\q\3\q\w\z\h\t\p\c\b\z\w\c\a\q\d\t\e\h\5\a\4\u\l\a\j\a\1\c\t\m\b\6\2\y\z\4\5\n\1\w\0\6\v\h\6\b\g\s\j\v\p\6\o\7\6\8\q\c\7\a\k\k\h\4\k\6\9\b\l\f\2\h\j\i\b\i\8\6\5\u\j\d\g\e\o\t\c\3\p\v\8\q\0\0\d\9\c\a\a\o\b\x\o\z\f\z\l\z\v\v\5\t\5\6\u\x\m\7\i\c\o\2\3\w\a\6\p\s\s\o\p\o\5\4\0\d\g\4\y\f\8\f\3\a\c\o\w\a\m\n\i\z\c\p\o\j\i\c\y\r\x\1\e\c\k\n\7\k\1\p\c\m\4\w\3\8\l\m\6\t\7\6\t\2\g\y\a\r\7\m\z\p\y\m\m\1\m\u\e\v\x\p\b\f\f\m\c\p\8\3\9\m\0\0\h\g\6\l\z\d\j\5\0\j\b\9\n\c\6\j\e\n\4\t\r\f\n\u\h\7\x\z\v\l\0\b\o\r\9\4\h\j\m\i\b\4\a\f\e\h\o\g\n\9\s\v\7\q\m\w\f\b\8\z\d\5\l\k\s\b\6\4\h\u\c\r\0\z\j\0\1\c\8\9\4\7\d\4\z\j\5\f\o\v\v\d\6\9\i\d\g\y\d\z\s\0\6\j\g\o\4\g\v\s\3\9\d\9\j\c\r\9\e\3\l\u\n\d\1\j\t\x\r\3\z\4\5\j\z\b\f\v\u\l\e\w\p\d\w\l\m\9\0\h\u\x\j\b\1\5\m\5\5\j\i\t\g\f\z\m\o\x\8\3\c\v\7\0\p\q\8\m\b\0\7\d\f\s\l\x\9\l\g\6\a\8\3\f\a\l\x\w\w\n\c\0\s\p\z\2\0\4\v\f\o\q\3\y\6\m\s\f\c\0\5\r\a\h\p\h\s\h\1\r\b\x\6\l\3\6\6\5\d\5\0\h\b\x\r\5\b\9\0\f\i\m\0\j\4\8\n\5\o\m\v\3\y\y\z\7\o\q\j\q\m\n\7\u\t\e\9\i\s\c\j\u\9\8\s\a\9\5\f\1\j\h\t\z\o\a\s\9\n\0\3\0\z\q\z\f\e\j\q\w\x\z\g\4\7\f\p\b\y\v\6\n\i\8\f\3\s\v\l\3\7\l\t\u\w\e\x\l\c\n\e\b\u\k\d\o\t\x\e\l\b\5\7\0\j\l\w\g\r\p\k\2\d\j\z\2\m\u\4\8\4\r\q\q\i\x\1\t\2\b\8\1\o\y\g\5\l\1\1\i\x\y\m\k\v\7\6\0\t\i\2\8\f\2\2\d\9\7\o\g\7\i\3\o\6\0\n\r\p\f\g\3\q\u\m\d\k\l\d\s\p\7\d\3\c\o\l\t\w\j\t\n\z\5\7\w\8\s\p\m\i\k\m\y\i\u\e\q\m\w\4\7\i\d\m\t\3\s\i\z\m\i\r\g\9\z\w\f\y\e\p\w\b\o\j\c\7\0\b\e\f\z\d\0\r\z\j\y\4\5\4\0\r\u\w\h\3\g\5\n\h\h\b\5\1\m\l\o\1\c\u\e\n\f\g\o\w\i\i\i\s\j\p\4\0\x\i\t\r\t\m\l\g\2\c\r\h\d\l\e\n\r\n\d\w\9\2\k\4\n\f\2\b\9\t\p\6\s\f\y\a\3\5\b\5\b\a\8\1\6\s\f\6\r\d\5\3\u\g\o\7\a\2\p\9\a\d\x\0\u\x\z
\7\g\w\e\n\f\0\2\t\l\h\a\q\r\u\g\5\v\d\e\4\9\b\y\j\l\d\x\6\n\x\v\w\e\e\q\3\t\w\k\m\o\7\0\u\x\9\c\1\g\f\3\2\i\z\l\k\h\k\4\d\p\s\y\u\n\b\6\a\k\f\u\7\l\5\3\t\3\6\c\q\i\j\y\7\5\f\y\9\n\k\a\5\w\0\d\i\0\8\u\d\l\o\5\s\l\q\n\7\g\h\g\9\z\6\9\j\r\w\2\c\a\5\i\n\m\v\4\b\o\y\0\z\v\4\c\8\7\3\f\z\7\3\f\b\g\2\g\n\7\n\l\r\9\f\u\0\q\x\x\y\j\c\f\t\c\l\5\l\u\x\3\c\2\i\c\s\t\9\f\c\r\u\b\j\q\k\2\n\3\w\p\2\a\v\j\f\k\n\2\e\g\k\4\l\5\u\t\p\3\n\m\0\j\h\g\f\9\l\w\8\w\o\6\a\f\e\z\l\n\i\d\m\6\v\z\3\d\m\v\i\o\k\z\a\x\w\1\l\8\t\4\j\4\e\n\b\b\f\t\r\t\7\4\8\m\g\r\m\z\z\w\w\y\b\5\a\a\i\1\i\l\r\h\g\r\4\y\w\p\i\m\q\t\c\4\m\s\y\b\x\0\r\1\b\1\r\0\t\c\u\8\p\h\i\e\c\6\i\a\1\j\1\2\2\0\s\o\w\l\l\u\e\j\s\7\o\b\b\c\k\h\a\l\k\v\o\x\y\o\e\a\n\2\4\v\3\v\l\a\6\r\v\4\1\y\r\h\q\j\5\7\6\0\2\v\5\z\8\7\f\b\2\7\x\x\5\2\d\j\n\r\m\q\x\9\r\b\o\p\k\2\j\w\d\i\q\i\1\w\a\u\t\q\p\y\c\o\b\7\a\m\7\l\t\e\g\w\o\z\k\z\d\k\y\d\s\6\u\p\j\g\i\7\a\i\i\w\0\l\3\2\t\3\v\q\z\5\d\4\v\j\e\a\b\j\p\i\9\s\r\c\v\4\0\p\5\t\5\2\l\0\w\c\y\x\0\0\6\v\1\1\g\f\z\7\x\a\t\t\v\w\q\r\0\7\b\b\8\7\e\g\s\f\y\s\6\d\y\7\r\0\z\d\8\y\p\u\v\5\d\z\b\f\r\7\r\b\0\m\k ]] 00:07:27.587 00:07:27.587 real 0m1.202s 00:07:27.587 user 0m0.788s 00:07:27.587 sys 0m0.575s 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.587 18:24:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.587 [2024-12-08 18:24:45.366016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
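The wall of random characters compared above is the dd_rw_offset payload check that just finished: basic_rw.sh writes a 4096-byte chunk of generated data to Nvme0n1 at an offset (--seek=1), reads the same region back into dd.dump1 (--skip=1 --count=1), and pattern-matches what it read against the original string, which is why the blob appears more than once. Stripped of the harness plumbing, the round trip amounts to roughly the following sketch, with BDEV_JSON standing in for the config sketched earlier:

    # sketch of the offset write/read/verify cycle driven by dd/basic_rw.sh
    DD=./build/bin/spdk_dd
    "$DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json "$BDEV_JSON"            # write at the offset
    "$DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json "$BDEV_JSON"  # read it back out
    read -rn4096 data_check < test/dd/dd.dump1
    [[ "$data_check" == "$(head -c 4096 test/dd/dd.dump0)" ]]     # payload must survive the round trip

The clear_nvme cleanup running next is the same kind of copy, just with /dev/zero as its input, so the posix suite below starts against a zeroed region of the bdev.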
00:07:27.587 [2024-12-08 18:24:45.366254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72109 ] 00:07:27.587 { 00:07:27.587 "subsystems": [ 00:07:27.587 { 00:07:27.587 "subsystem": "bdev", 00:07:27.587 "config": [ 00:07:27.587 { 00:07:27.587 "params": { 00:07:27.587 "trtype": "pcie", 00:07:27.587 "traddr": "0000:00:10.0", 00:07:27.587 "name": "Nvme0" 00:07:27.587 }, 00:07:27.587 "method": "bdev_nvme_attach_controller" 00:07:27.587 }, 00:07:27.587 { 00:07:27.587 "method": "bdev_wait_for_examine" 00:07:27.587 } 00:07:27.587 ] 00:07:27.587 } 00:07:27.587 ] 00:07:27.587 } 00:07:27.587 [2024-12-08 18:24:45.503171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.845 [2024-12-08 18:24:45.562892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.845 [2024-12-08 18:24:45.616479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.845  [2024-12-08T18:24:46.033Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:28.103 00:07:28.103 18:24:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.103 ************************************ 00:07:28.103 END TEST spdk_dd_basic_rw 00:07:28.103 ************************************ 00:07:28.104 00:07:28.104 real 0m16.931s 00:07:28.104 user 0m11.779s 00:07:28.104 sys 0m6.625s 00:07:28.104 18:24:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.104 18:24:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.104 18:24:45 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:28.104 18:24:45 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.104 18:24:45 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.104 18:24:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:28.104 ************************************ 00:07:28.104 START TEST spdk_dd_posix 00:07:28.104 ************************************ 00:07:28.104 18:24:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:28.364 * Looking for test storage... 
00:07:28.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.364 --rc genhtml_branch_coverage=1 00:07:28.364 --rc genhtml_function_coverage=1 00:07:28.364 --rc genhtml_legend=1 00:07:28.364 --rc geninfo_all_blocks=1 00:07:28.364 --rc geninfo_unexecuted_blocks=1 00:07:28.364 00:07:28.364 ' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.364 --rc genhtml_branch_coverage=1 00:07:28.364 --rc genhtml_function_coverage=1 00:07:28.364 --rc genhtml_legend=1 00:07:28.364 --rc geninfo_all_blocks=1 00:07:28.364 --rc geninfo_unexecuted_blocks=1 00:07:28.364 00:07:28.364 ' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.364 --rc genhtml_branch_coverage=1 00:07:28.364 --rc genhtml_function_coverage=1 00:07:28.364 --rc genhtml_legend=1 00:07:28.364 --rc geninfo_all_blocks=1 00:07:28.364 --rc geninfo_unexecuted_blocks=1 00:07:28.364 00:07:28.364 ' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.364 --rc genhtml_branch_coverage=1 00:07:28.364 --rc genhtml_function_coverage=1 00:07:28.364 --rc genhtml_legend=1 00:07:28.364 --rc geninfo_all_blocks=1 00:07:28.364 --rc geninfo_unexecuted_blocks=1 00:07:28.364 00:07:28.364 ' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:28.364 * First test run, liburing in use 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.364 ************************************ 00:07:28.364 START TEST dd_flag_append 00:07:28.364 ************************************ 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hitd8yu39w6ts20k1hjwfzgr125si0n5 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ck0jwgpfgbj7aps3xck4ryivzctcc6lo 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hitd8yu39w6ts20k1hjwfzgr125si0n5 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ck0jwgpfgbj7aps3xck4ryivzctcc6lo 00:07:28.364 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:28.364 [2024-12-08 18:24:46.236146] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
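The dd_flag_append case starting here generates two 32-character strings with gen_bytes, seeds dd.dump0 with the first and dd.dump1 with the second, then copies dump0 onto dump1 with --oflag=append; the pattern match a little further down only passes if dump1 ends up holding the second string immediately followed by the first, i.e. the existing contents were kept and the new bytes landed at the end. Reduced to its essentials:

    # sketch of the append check; gen_bytes is the harness helper seen in the log
    dump0=$(gen_bytes 32)                  # hitd8yu39w6... in this particular run
    dump1=$(gen_bytes 32)                  # ck0jwgpfgbj7... in this particular run
    printf %s "$dump0" > test/dd/dd.dump0
    printf %s "$dump1" > test/dd/dd.dump1
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append
    [[ "$(cat test/dd/dd.dump1)" == "${dump1}${dump0}" ]]   # old contents kept, new bytes appended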
00:07:28.364 [2024-12-08 18:24:46.236246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72176 ] 00:07:28.624 [2024-12-08 18:24:46.370726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.624 [2024-12-08 18:24:46.425666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.624 [2024-12-08 18:24:46.478407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.624  [2024-12-08T18:24:46.813Z] Copying: 32/32 [B] (average 31 kBps) 00:07:28.883 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ck0jwgpfgbj7aps3xck4ryivzctcc6lohitd8yu39w6ts20k1hjwfzgr125si0n5 == \c\k\0\j\w\g\p\f\g\b\j\7\a\p\s\3\x\c\k\4\r\y\i\v\z\c\t\c\c\6\l\o\h\i\t\d\8\y\u\3\9\w\6\t\s\2\0\k\1\h\j\w\f\z\g\r\1\2\5\s\i\0\n\5 ]] 00:07:28.883 00:07:28.883 real 0m0.536s 00:07:28.883 user 0m0.280s 00:07:28.883 sys 0m0.276s 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.883 ************************************ 00:07:28.883 END TEST dd_flag_append 00:07:28.883 ************************************ 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.883 ************************************ 00:07:28.883 START TEST dd_flag_directory 00:07:28.883 ************************************ 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.883 18:24:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.142 [2024-12-08 18:24:46.820449] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.142 [2024-12-08 18:24:46.820545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72210 ] 00:07:29.142 [2024-12-08 18:24:46.956019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.142 [2024-12-08 18:24:47.007008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.142 [2024-12-08 18:24:47.057511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.402 [2024-12-08 18:24:47.090305] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.402 [2024-12-08 18:24:47.090347] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.402 [2024-12-08 18:24:47.090360] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.402 [2024-12-08 18:24:47.196119] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.402 18:24:47 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.402 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.402 [2024-12-08 18:24:47.311570] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.402 [2024-12-08 18:24:47.311643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72216 ] 00:07:29.662 [2024-12-08 18:24:47.440199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.662 [2024-12-08 18:24:47.514318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.662 [2024-12-08 18:24:47.563678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.923 [2024-12-08 18:24:47.591958] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.923 [2024-12-08 18:24:47.592011] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.923 [2024-12-08 18:24:47.592025] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.923 [2024-12-08 18:24:47.699682] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.923 00:07:29.923 real 0m1.020s 00:07:29.923 user 0m0.531s 00:07:29.923 sys 0m0.280s 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.923 ************************************ 00:07:29.923 END TEST dd_flag_directory 00:07:29.923 ************************************ 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:29.923 18:24:47 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:29.923 ************************************ 00:07:29.923 START TEST dd_flag_nofollow 00:07:29.923 ************************************ 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.923 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.182 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.182 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.182 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.182 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.182 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.182 18:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.182 [2024-12-08 18:24:47.905073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
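dd_flag_nofollow, which starts above, symlinks dd.dump0.link and dd.dump1.link at the two dump files and then expects spdk_dd to refuse the copy whenever a link is opened with the nofollow flag; the "Too many levels of symbolic links" errors that follow are the intended outcome, and a final run through the link without the flag still has to succeed. The shape of the test is roughly:

    # sketch of the nofollow checks; "!" stands in for the harness's expected-failure wrapper (NOT)
    ln -fs test/dd/dd.dump0 test/dd/dd.dump0.link
    ln -fs test/dd/dd.dump1 test/dd/dd.dump1.link
    DD=./build/bin/spdk_dd
    ! "$DD" --if=test/dd/dd.dump0.link --iflag=nofollow --of=test/dd/dd.dump1   # must fail: ELOOP on input
    ! "$DD" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1.link --oflag=nofollow   # must fail: ELOOP on output
    "$DD" --if=test/dd/dd.dump0.link --of=test/dd/dd.dump1                      # plain copy via the link succeeds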
00:07:30.182 [2024-12-08 18:24:47.905170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72250 ] 00:07:30.182 [2024-12-08 18:24:48.039270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.182 [2024-12-08 18:24:48.089813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.441 [2024-12-08 18:24:48.138842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.441 [2024-12-08 18:24:48.167358] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:30.441 [2024-12-08 18:24:48.167777] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:30.441 [2024-12-08 18:24:48.167799] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.441 [2024-12-08 18:24:48.273740] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.441 18:24:48 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.441 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:30.700 [2024-12-08 18:24:48.387483] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:30.700 [2024-12-08 18:24:48.387594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72254 ] 00:07:30.700 [2024-12-08 18:24:48.518197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.700 [2024-12-08 18:24:48.573121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.700 [2024-12-08 18:24:48.622163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.959 [2024-12-08 18:24:48.650937] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:30.960 [2024-12-08 18:24:48.650992] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:30.960 [2024-12-08 18:24:48.651007] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.960 [2024-12-08 18:24:48.760081] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:30.960 18:24:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.219 [2024-12-08 18:24:48.905801] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:31.219 [2024-12-08 18:24:48.905912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72267 ] 00:07:31.219 [2024-12-08 18:24:49.038118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.219 [2024-12-08 18:24:49.089526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.219 [2024-12-08 18:24:49.138297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.479  [2024-12-08T18:24:49.409Z] Copying: 512/512 [B] (average 500 kBps) 00:07:31.479 00:07:31.479 ************************************ 00:07:31.479 END TEST dd_flag_nofollow 00:07:31.479 ************************************ 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ dvl06e6hvrj68zi3n9zukn6ms8njav68w497a4gde3z2dtl9ydxnvwvs918i8k761keqafpp55ltdwt0zvnslyl1waztv63t9iep61jcuwnlg7htlr4essar06ozc7e307jxjy8xvw275f8vwbmwsyps7omi59rx3s9lfo1ugrp31l78rw9owpv2eura16oljrrvg8zv7fwslchzqlc3srb5ybjxh4a8hijqhyql92o26ibwhpbcev39y4fbk2rzm9sj8hqvdnujklc6sin9kfhnpx8xllak8lk7vp776uy5y7ie494uke7ze9xwnc97baav41gjrrt7e8ovh6md4glhfokbpvzynj9q6hbloi1e0otge76ofqfwa7fnyxct7tbjplbfo8lfaop1fhlxk8wpra6q4o05f6j556kpwiveuwkczg72fwxfee6qa0pl5cwu97h22031uice1rwk7dmvgu456u2yldavve7he4mrwc5f8sb60xqd08cxuqda == \d\v\l\0\6\e\6\h\v\r\j\6\8\z\i\3\n\9\z\u\k\n\6\m\s\8\n\j\a\v\6\8\w\4\9\7\a\4\g\d\e\3\z\2\d\t\l\9\y\d\x\n\v\w\v\s\9\1\8\i\8\k\7\6\1\k\e\q\a\f\p\p\5\5\l\t\d\w\t\0\z\v\n\s\l\y\l\1\w\a\z\t\v\6\3\t\9\i\e\p\6\1\j\c\u\w\n\l\g\7\h\t\l\r\4\e\s\s\a\r\0\6\o\z\c\7\e\3\0\7\j\x\j\y\8\x\v\w\2\7\5\f\8\v\w\b\m\w\s\y\p\s\7\o\m\i\5\9\r\x\3\s\9\l\f\o\1\u\g\r\p\3\1\l\7\8\r\w\9\o\w\p\v\2\e\u\r\a\1\6\o\l\j\r\r\v\g\8\z\v\7\f\w\s\l\c\h\z\q\l\c\3\s\r\b\5\y\b\j\x\h\4\a\8\h\i\j\q\h\y\q\l\9\2\o\2\6\i\b\w\h\p\b\c\e\v\3\9\y\4\f\b\k\2\r\z\m\9\s\j\8\h\q\v\d\n\u\j\k\l\c\6\s\i\n\9\k\f\h\n\p\x\8\x\l\l\a\k\8\l\k\7\v\p\7\7\6\u\y\5\y\7\i\e\4\9\4\u\k\e\7\z\e\9\x\w\n\c\9\7\b\a\a\v\4\1\g\j\r\r\t\7\e\8\o\v\h\6\m\d\4\g\l\h\f\o\k\b\p\v\z\y\n\j\9\q\6\h\b\l\o\i\1\e\0\o\t\g\e\7\6\o\f\q\f\w\a\7\f\n\y\x\c\t\7\t\b\j\p\l\b\f\o\8\l\f\a\o\p\1\f\h\l\x\k\8\w\p\r\a\6\q\4\o\0\5\f\6\j\5\5\6\k\p\w\i\v\e\u\w\k\c\z\g\7\2\f\w\x\f\e\e\6\q\a\0\p\l\5\c\w\u\9\7\h\2\2\0\3\1\u\i\c\e\1\r\w\k\7\d\m\v\g\u\4\5\6\u\2\y\l\d\a\v\v\e\7\h\e\4\m\r\w\c\5\f\8\s\b\6\0\x\q\d\0\8\c\x\u\q\d\a ]] 00:07:31.479 00:07:31.479 real 0m1.499s 00:07:31.479 user 0m0.755s 00:07:31.479 sys 0m0.546s 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 ************************************ 00:07:31.479 START TEST dd_flag_noatime 00:07:31.479 ************************************ 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:31.479 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733682289 00:07:31.739 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.739 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733682289 00:07:31.739 18:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:32.732 18:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.732 [2024-12-08 18:24:50.468086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:32.732 [2024-12-08 18:24:50.468199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72304 ] 00:07:32.732 [2024-12-08 18:24:50.606682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.017 [2024-12-08 18:24:50.671277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.017 [2024-12-08 18:24:50.726641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.017  [2024-12-08T18:24:51.214Z] Copying: 512/512 [B] (average 500 kBps) 00:07:33.284 00:07:33.284 18:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.284 18:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733682289 )) 00:07:33.284 18:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.284 18:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733682289 )) 00:07:33.284 18:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.284 [2024-12-08 18:24:51.016591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
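dd_flag_noatime records the access time of dd.dump0 with stat --printf=%X, sleeps for a second, and copies the file twice: once with --iflag=noatime, after which the atime comparisons above must still see the original value, and once without the flag, after which the check just below expects the atime to have moved forward. In outline:

    # sketch of the noatime check; 1733682289 above is simply this run's starting atime
    atime_before=$(stat --printf=%X test/dd/dd.dump0)
    sleep 1
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1
    (( $(stat --printf=%X test/dd/dd.dump0) == atime_before ))   # noatime: access time untouched
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1
    (( $(stat --printf=%X test/dd/dd.dump0) > atime_before ))    # normal read: access time advanced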
00:07:33.284 [2024-12-08 18:24:51.016691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72323 ] 00:07:33.284 [2024-12-08 18:24:51.144243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.284 [2024-12-08 18:24:51.194881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.544 [2024-12-08 18:24:51.243449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.544  [2024-12-08T18:24:51.474Z] Copying: 512/512 [B] (average 500 kBps) 00:07:33.544 00:07:33.544 18:24:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.544 ************************************ 00:07:33.544 END TEST dd_flag_noatime 00:07:33.544 ************************************ 00:07:33.544 18:24:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733682291 )) 00:07:33.544 00:07:33.544 real 0m2.061s 00:07:33.544 user 0m0.544s 00:07:33.544 sys 0m0.557s 00:07:33.544 18:24:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.544 18:24:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.803 ************************************ 00:07:33.803 START TEST dd_flags_misc 00:07:33.803 ************************************ 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.803 18:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:33.803 [2024-12-08 18:24:51.564013] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
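The dd_flags_misc loop beginning here crosses the read-side flags (direct, nonblock) with the write-side flags (direct, nonblock, sync, dsync): for each read flag it generates a fresh 512-byte payload and copies it to dd.dump1 once per write flag, checking after every combination that the data came through intact (the long pattern matches below). A sketch of the iteration, using a plain cmp in place of the harness's pattern match:

    # sketch of the flag matrix from dd/posix.sh
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512 > test/dd/dd.dump0          # fresh payload per read-side flag (harness helper)
      for flag_rw in "${flags_rw[@]}"; do
        ./build/bin/spdk_dd --if=test/dd/dd.dump0 --iflag="$flag_ro" \
                            --of=test/dd/dd.dump1 --oflag="$flag_rw"
        cmp test/dd/dd.dump0 test/dd/dd.dump1   # contents must match after every combination
      done
    done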
00:07:33.803 [2024-12-08 18:24:51.564104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72346 ] 00:07:33.803 [2024-12-08 18:24:51.696332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.063 [2024-12-08 18:24:51.749638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.063 [2024-12-08 18:24:51.803391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.063  [2024-12-08T18:24:52.252Z] Copying: 512/512 [B] (average 500 kBps) 00:07:34.322 00:07:34.322 18:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5oq4enl4r7j3dkobbi6vbce6et1oyzp3v6zldztxll38zkw5fdye7il3ylzvktv1xnxma4aibubzoased4u8hfgv5h3t7rvz7awtf6v8c7wzp88s57b7egozk32jgil9h59mm5pyvntf7q24csuqxmw34u4ir7og3kbjcv4ng17eq32xw0ml0bp51yjlgaychp4ldyhx9qz7frxrpbn6gwy68b7chcds4r0whlcqi86bmoenlsd8xkmgstzcw920icmwj3zdamo82h8x72a0pwkb7i22jhmp5so32qmtyb0odue5zlg2v00e97ys8zyksvb7ay61yybzdp4pfdqlo8gc84suvqfxpbucv4mtuydxssrf2b2ll37999j6ze0k9vznxzzun2gpi7tvb09ylpwzpk7qzf7tv6cnwc1ezrte0t7pcaw6yyxhs47hvflk31f09yap84sycs2tczty14lze11n2wlclq57fq85s294snxhci7cwynjntga1wu7 == \5\o\q\4\e\n\l\4\r\7\j\3\d\k\o\b\b\i\6\v\b\c\e\6\e\t\1\o\y\z\p\3\v\6\z\l\d\z\t\x\l\l\3\8\z\k\w\5\f\d\y\e\7\i\l\3\y\l\z\v\k\t\v\1\x\n\x\m\a\4\a\i\b\u\b\z\o\a\s\e\d\4\u\8\h\f\g\v\5\h\3\t\7\r\v\z\7\a\w\t\f\6\v\8\c\7\w\z\p\8\8\s\5\7\b\7\e\g\o\z\k\3\2\j\g\i\l\9\h\5\9\m\m\5\p\y\v\n\t\f\7\q\2\4\c\s\u\q\x\m\w\3\4\u\4\i\r\7\o\g\3\k\b\j\c\v\4\n\g\1\7\e\q\3\2\x\w\0\m\l\0\b\p\5\1\y\j\l\g\a\y\c\h\p\4\l\d\y\h\x\9\q\z\7\f\r\x\r\p\b\n\6\g\w\y\6\8\b\7\c\h\c\d\s\4\r\0\w\h\l\c\q\i\8\6\b\m\o\e\n\l\s\d\8\x\k\m\g\s\t\z\c\w\9\2\0\i\c\m\w\j\3\z\d\a\m\o\8\2\h\8\x\7\2\a\0\p\w\k\b\7\i\2\2\j\h\m\p\5\s\o\3\2\q\m\t\y\b\0\o\d\u\e\5\z\l\g\2\v\0\0\e\9\7\y\s\8\z\y\k\s\v\b\7\a\y\6\1\y\y\b\z\d\p\4\p\f\d\q\l\o\8\g\c\8\4\s\u\v\q\f\x\p\b\u\c\v\4\m\t\u\y\d\x\s\s\r\f\2\b\2\l\l\3\7\9\9\9\j\6\z\e\0\k\9\v\z\n\x\z\z\u\n\2\g\p\i\7\t\v\b\0\9\y\l\p\w\z\p\k\7\q\z\f\7\t\v\6\c\n\w\c\1\e\z\r\t\e\0\t\7\p\c\a\w\6\y\y\x\h\s\4\7\h\v\f\l\k\3\1\f\0\9\y\a\p\8\4\s\y\c\s\2\t\c\z\t\y\1\4\l\z\e\1\1\n\2\w\l\c\l\q\5\7\f\q\8\5\s\2\9\4\s\n\x\h\c\i\7\c\w\y\n\j\n\t\g\a\1\w\u\7 ]] 00:07:34.322 18:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.322 18:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:34.322 [2024-12-08 18:24:52.081784] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:34.322 [2024-12-08 18:24:52.081885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72361 ] 00:07:34.322 [2024-12-08 18:24:52.218785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.581 [2024-12-08 18:24:52.271546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.581 [2024-12-08 18:24:52.320301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.581  [2024-12-08T18:24:52.771Z] Copying: 512/512 [B] (average 500 kBps) 00:07:34.841 00:07:34.841 18:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5oq4enl4r7j3dkobbi6vbce6et1oyzp3v6zldztxll38zkw5fdye7il3ylzvktv1xnxma4aibubzoased4u8hfgv5h3t7rvz7awtf6v8c7wzp88s57b7egozk32jgil9h59mm5pyvntf7q24csuqxmw34u4ir7og3kbjcv4ng17eq32xw0ml0bp51yjlgaychp4ldyhx9qz7frxrpbn6gwy68b7chcds4r0whlcqi86bmoenlsd8xkmgstzcw920icmwj3zdamo82h8x72a0pwkb7i22jhmp5so32qmtyb0odue5zlg2v00e97ys8zyksvb7ay61yybzdp4pfdqlo8gc84suvqfxpbucv4mtuydxssrf2b2ll37999j6ze0k9vznxzzun2gpi7tvb09ylpwzpk7qzf7tv6cnwc1ezrte0t7pcaw6yyxhs47hvflk31f09yap84sycs2tczty14lze11n2wlclq57fq85s294snxhci7cwynjntga1wu7 == \5\o\q\4\e\n\l\4\r\7\j\3\d\k\o\b\b\i\6\v\b\c\e\6\e\t\1\o\y\z\p\3\v\6\z\l\d\z\t\x\l\l\3\8\z\k\w\5\f\d\y\e\7\i\l\3\y\l\z\v\k\t\v\1\x\n\x\m\a\4\a\i\b\u\b\z\o\a\s\e\d\4\u\8\h\f\g\v\5\h\3\t\7\r\v\z\7\a\w\t\f\6\v\8\c\7\w\z\p\8\8\s\5\7\b\7\e\g\o\z\k\3\2\j\g\i\l\9\h\5\9\m\m\5\p\y\v\n\t\f\7\q\2\4\c\s\u\q\x\m\w\3\4\u\4\i\r\7\o\g\3\k\b\j\c\v\4\n\g\1\7\e\q\3\2\x\w\0\m\l\0\b\p\5\1\y\j\l\g\a\y\c\h\p\4\l\d\y\h\x\9\q\z\7\f\r\x\r\p\b\n\6\g\w\y\6\8\b\7\c\h\c\d\s\4\r\0\w\h\l\c\q\i\8\6\b\m\o\e\n\l\s\d\8\x\k\m\g\s\t\z\c\w\9\2\0\i\c\m\w\j\3\z\d\a\m\o\8\2\h\8\x\7\2\a\0\p\w\k\b\7\i\2\2\j\h\m\p\5\s\o\3\2\q\m\t\y\b\0\o\d\u\e\5\z\l\g\2\v\0\0\e\9\7\y\s\8\z\y\k\s\v\b\7\a\y\6\1\y\y\b\z\d\p\4\p\f\d\q\l\o\8\g\c\8\4\s\u\v\q\f\x\p\b\u\c\v\4\m\t\u\y\d\x\s\s\r\f\2\b\2\l\l\3\7\9\9\9\j\6\z\e\0\k\9\v\z\n\x\z\z\u\n\2\g\p\i\7\t\v\b\0\9\y\l\p\w\z\p\k\7\q\z\f\7\t\v\6\c\n\w\c\1\e\z\r\t\e\0\t\7\p\c\a\w\6\y\y\x\h\s\4\7\h\v\f\l\k\3\1\f\0\9\y\a\p\8\4\s\y\c\s\2\t\c\z\t\y\1\4\l\z\e\1\1\n\2\w\l\c\l\q\5\7\f\q\8\5\s\2\9\4\s\n\x\h\c\i\7\c\w\y\n\j\n\t\g\a\1\w\u\7 ]] 00:07:34.841 18:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.841 18:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:34.841 [2024-12-08 18:24:52.590344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:34.841 [2024-12-08 18:24:52.590459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72365 ] 00:07:34.841 [2024-12-08 18:24:52.727649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.101 [2024-12-08 18:24:52.782177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.101 [2024-12-08 18:24:52.834576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.101  [2024-12-08T18:24:53.297Z] Copying: 512/512 [B] (average 166 kBps) 00:07:35.367 00:07:35.367 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5oq4enl4r7j3dkobbi6vbce6et1oyzp3v6zldztxll38zkw5fdye7il3ylzvktv1xnxma4aibubzoased4u8hfgv5h3t7rvz7awtf6v8c7wzp88s57b7egozk32jgil9h59mm5pyvntf7q24csuqxmw34u4ir7og3kbjcv4ng17eq32xw0ml0bp51yjlgaychp4ldyhx9qz7frxrpbn6gwy68b7chcds4r0whlcqi86bmoenlsd8xkmgstzcw920icmwj3zdamo82h8x72a0pwkb7i22jhmp5so32qmtyb0odue5zlg2v00e97ys8zyksvb7ay61yybzdp4pfdqlo8gc84suvqfxpbucv4mtuydxssrf2b2ll37999j6ze0k9vznxzzun2gpi7tvb09ylpwzpk7qzf7tv6cnwc1ezrte0t7pcaw6yyxhs47hvflk31f09yap84sycs2tczty14lze11n2wlclq57fq85s294snxhci7cwynjntga1wu7 == \5\o\q\4\e\n\l\4\r\7\j\3\d\k\o\b\b\i\6\v\b\c\e\6\e\t\1\o\y\z\p\3\v\6\z\l\d\z\t\x\l\l\3\8\z\k\w\5\f\d\y\e\7\i\l\3\y\l\z\v\k\t\v\1\x\n\x\m\a\4\a\i\b\u\b\z\o\a\s\e\d\4\u\8\h\f\g\v\5\h\3\t\7\r\v\z\7\a\w\t\f\6\v\8\c\7\w\z\p\8\8\s\5\7\b\7\e\g\o\z\k\3\2\j\g\i\l\9\h\5\9\m\m\5\p\y\v\n\t\f\7\q\2\4\c\s\u\q\x\m\w\3\4\u\4\i\r\7\o\g\3\k\b\j\c\v\4\n\g\1\7\e\q\3\2\x\w\0\m\l\0\b\p\5\1\y\j\l\g\a\y\c\h\p\4\l\d\y\h\x\9\q\z\7\f\r\x\r\p\b\n\6\g\w\y\6\8\b\7\c\h\c\d\s\4\r\0\w\h\l\c\q\i\8\6\b\m\o\e\n\l\s\d\8\x\k\m\g\s\t\z\c\w\9\2\0\i\c\m\w\j\3\z\d\a\m\o\8\2\h\8\x\7\2\a\0\p\w\k\b\7\i\2\2\j\h\m\p\5\s\o\3\2\q\m\t\y\b\0\o\d\u\e\5\z\l\g\2\v\0\0\e\9\7\y\s\8\z\y\k\s\v\b\7\a\y\6\1\y\y\b\z\d\p\4\p\f\d\q\l\o\8\g\c\8\4\s\u\v\q\f\x\p\b\u\c\v\4\m\t\u\y\d\x\s\s\r\f\2\b\2\l\l\3\7\9\9\9\j\6\z\e\0\k\9\v\z\n\x\z\z\u\n\2\g\p\i\7\t\v\b\0\9\y\l\p\w\z\p\k\7\q\z\f\7\t\v\6\c\n\w\c\1\e\z\r\t\e\0\t\7\p\c\a\w\6\y\y\x\h\s\4\7\h\v\f\l\k\3\1\f\0\9\y\a\p\8\4\s\y\c\s\2\t\c\z\t\y\1\4\l\z\e\1\1\n\2\w\l\c\l\q\5\7\f\q\8\5\s\2\9\4\s\n\x\h\c\i\7\c\w\y\n\j\n\t\g\a\1\w\u\7 ]] 00:07:35.367 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:35.367 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:35.367 [2024-12-08 18:24:53.114861] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:35.367 [2024-12-08 18:24:53.114960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72380 ] 00:07:35.367 [2024-12-08 18:24:53.246764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.628 [2024-12-08 18:24:53.296424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.628 [2024-12-08 18:24:53.346825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.628  [2024-12-08T18:24:53.558Z] Copying: 512/512 [B] (average 166 kBps) 00:07:35.628 00:07:35.628 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5oq4enl4r7j3dkobbi6vbce6et1oyzp3v6zldztxll38zkw5fdye7il3ylzvktv1xnxma4aibubzoased4u8hfgv5h3t7rvz7awtf6v8c7wzp88s57b7egozk32jgil9h59mm5pyvntf7q24csuqxmw34u4ir7og3kbjcv4ng17eq32xw0ml0bp51yjlgaychp4ldyhx9qz7frxrpbn6gwy68b7chcds4r0whlcqi86bmoenlsd8xkmgstzcw920icmwj3zdamo82h8x72a0pwkb7i22jhmp5so32qmtyb0odue5zlg2v00e97ys8zyksvb7ay61yybzdp4pfdqlo8gc84suvqfxpbucv4mtuydxssrf2b2ll37999j6ze0k9vznxzzun2gpi7tvb09ylpwzpk7qzf7tv6cnwc1ezrte0t7pcaw6yyxhs47hvflk31f09yap84sycs2tczty14lze11n2wlclq57fq85s294snxhci7cwynjntga1wu7 == \5\o\q\4\e\n\l\4\r\7\j\3\d\k\o\b\b\i\6\v\b\c\e\6\e\t\1\o\y\z\p\3\v\6\z\l\d\z\t\x\l\l\3\8\z\k\w\5\f\d\y\e\7\i\l\3\y\l\z\v\k\t\v\1\x\n\x\m\a\4\a\i\b\u\b\z\o\a\s\e\d\4\u\8\h\f\g\v\5\h\3\t\7\r\v\z\7\a\w\t\f\6\v\8\c\7\w\z\p\8\8\s\5\7\b\7\e\g\o\z\k\3\2\j\g\i\l\9\h\5\9\m\m\5\p\y\v\n\t\f\7\q\2\4\c\s\u\q\x\m\w\3\4\u\4\i\r\7\o\g\3\k\b\j\c\v\4\n\g\1\7\e\q\3\2\x\w\0\m\l\0\b\p\5\1\y\j\l\g\a\y\c\h\p\4\l\d\y\h\x\9\q\z\7\f\r\x\r\p\b\n\6\g\w\y\6\8\b\7\c\h\c\d\s\4\r\0\w\h\l\c\q\i\8\6\b\m\o\e\n\l\s\d\8\x\k\m\g\s\t\z\c\w\9\2\0\i\c\m\w\j\3\z\d\a\m\o\8\2\h\8\x\7\2\a\0\p\w\k\b\7\i\2\2\j\h\m\p\5\s\o\3\2\q\m\t\y\b\0\o\d\u\e\5\z\l\g\2\v\0\0\e\9\7\y\s\8\z\y\k\s\v\b\7\a\y\6\1\y\y\b\z\d\p\4\p\f\d\q\l\o\8\g\c\8\4\s\u\v\q\f\x\p\b\u\c\v\4\m\t\u\y\d\x\s\s\r\f\2\b\2\l\l\3\7\9\9\9\j\6\z\e\0\k\9\v\z\n\x\z\z\u\n\2\g\p\i\7\t\v\b\0\9\y\l\p\w\z\p\k\7\q\z\f\7\t\v\6\c\n\w\c\1\e\z\r\t\e\0\t\7\p\c\a\w\6\y\y\x\h\s\4\7\h\v\f\l\k\3\1\f\0\9\y\a\p\8\4\s\y\c\s\2\t\c\z\t\y\1\4\l\z\e\1\1\n\2\w\l\c\l\q\5\7\f\q\8\5\s\2\9\4\s\n\x\h\c\i\7\c\w\y\n\j\n\t\g\a\1\w\u\7 ]] 00:07:35.628 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:35.628 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:35.628 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:35.628 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:35.887 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:35.887 18:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:35.887 [2024-12-08 18:24:53.617906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:35.887 [2024-12-08 18:24:53.618005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72384 ] 00:07:35.887 [2024-12-08 18:24:53.754966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.887 [2024-12-08 18:24:53.808589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.146 [2024-12-08 18:24:53.860959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.146  [2024-12-08T18:24:54.335Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.405 00:07:36.406 18:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 74eh0gg3pnhh7j9rhcmodpwzgxlghk0v1g3ebqd4o6it0ztjcox25ff7n98mivjuln4us6fn57l98yv9prphygnaz09tl007usnkjfl8nmyo5a02vcxesg8mh0ep1i58qbjsydpssrl7jn610rc7rawrl5vb73lqb57g8qqhhlqhq5x4xbtlk0i6difxlp87ky9vlm55lbqrthdws4tshrvpxtn37v1tytkt47wrihxqrm3ekzta2b6o4z449nvddlhdwcrb0ucadq931509yfuzzkrkpfby5xb2ardg2j13svip35hl1qrj8i8vmdxnmws967xs80vcbi7eexlatxeo48brs1bc1wze2ih8suoktuas8xrozcrc01xeup7xbhvz9emi5vwbn3c11xh3u1z5i964jpe65g2nwpzwa3gkefylrbg2gs2314jl7ykm2m1uamd3m1dd57tvgdj5c7ecnc7t5xd8jsth0s3bqk9avh30fx5nu8bwwu7y88q4 == \7\4\e\h\0\g\g\3\p\n\h\h\7\j\9\r\h\c\m\o\d\p\w\z\g\x\l\g\h\k\0\v\1\g\3\e\b\q\d\4\o\6\i\t\0\z\t\j\c\o\x\2\5\f\f\7\n\9\8\m\i\v\j\u\l\n\4\u\s\6\f\n\5\7\l\9\8\y\v\9\p\r\p\h\y\g\n\a\z\0\9\t\l\0\0\7\u\s\n\k\j\f\l\8\n\m\y\o\5\a\0\2\v\c\x\e\s\g\8\m\h\0\e\p\1\i\5\8\q\b\j\s\y\d\p\s\s\r\l\7\j\n\6\1\0\r\c\7\r\a\w\r\l\5\v\b\7\3\l\q\b\5\7\g\8\q\q\h\h\l\q\h\q\5\x\4\x\b\t\l\k\0\i\6\d\i\f\x\l\p\8\7\k\y\9\v\l\m\5\5\l\b\q\r\t\h\d\w\s\4\t\s\h\r\v\p\x\t\n\3\7\v\1\t\y\t\k\t\4\7\w\r\i\h\x\q\r\m\3\e\k\z\t\a\2\b\6\o\4\z\4\4\9\n\v\d\d\l\h\d\w\c\r\b\0\u\c\a\d\q\9\3\1\5\0\9\y\f\u\z\z\k\r\k\p\f\b\y\5\x\b\2\a\r\d\g\2\j\1\3\s\v\i\p\3\5\h\l\1\q\r\j\8\i\8\v\m\d\x\n\m\w\s\9\6\7\x\s\8\0\v\c\b\i\7\e\e\x\l\a\t\x\e\o\4\8\b\r\s\1\b\c\1\w\z\e\2\i\h\8\s\u\o\k\t\u\a\s\8\x\r\o\z\c\r\c\0\1\x\e\u\p\7\x\b\h\v\z\9\e\m\i\5\v\w\b\n\3\c\1\1\x\h\3\u\1\z\5\i\9\6\4\j\p\e\6\5\g\2\n\w\p\z\w\a\3\g\k\e\f\y\l\r\b\g\2\g\s\2\3\1\4\j\l\7\y\k\m\2\m\1\u\a\m\d\3\m\1\d\d\5\7\t\v\g\d\j\5\c\7\e\c\n\c\7\t\5\x\d\8\j\s\t\h\0\s\3\b\q\k\9\a\v\h\3\0\f\x\5\n\u\8\b\w\w\u\7\y\8\8\q\4 ]] 00:07:36.406 18:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.406 18:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:36.406 [2024-12-08 18:24:54.134856] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.406 [2024-12-08 18:24:54.134954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72399 ] 00:07:36.406 [2024-12-08 18:24:54.266998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.406 [2024-12-08 18:24:54.317097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.665 [2024-12-08 18:24:54.367539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.665  [2024-12-08T18:24:54.595Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.665 00:07:36.665 18:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 74eh0gg3pnhh7j9rhcmodpwzgxlghk0v1g3ebqd4o6it0ztjcox25ff7n98mivjuln4us6fn57l98yv9prphygnaz09tl007usnkjfl8nmyo5a02vcxesg8mh0ep1i58qbjsydpssrl7jn610rc7rawrl5vb73lqb57g8qqhhlqhq5x4xbtlk0i6difxlp87ky9vlm55lbqrthdws4tshrvpxtn37v1tytkt47wrihxqrm3ekzta2b6o4z449nvddlhdwcrb0ucadq931509yfuzzkrkpfby5xb2ardg2j13svip35hl1qrj8i8vmdxnmws967xs80vcbi7eexlatxeo48brs1bc1wze2ih8suoktuas8xrozcrc01xeup7xbhvz9emi5vwbn3c11xh3u1z5i964jpe65g2nwpzwa3gkefylrbg2gs2314jl7ykm2m1uamd3m1dd57tvgdj5c7ecnc7t5xd8jsth0s3bqk9avh30fx5nu8bwwu7y88q4 == \7\4\e\h\0\g\g\3\p\n\h\h\7\j\9\r\h\c\m\o\d\p\w\z\g\x\l\g\h\k\0\v\1\g\3\e\b\q\d\4\o\6\i\t\0\z\t\j\c\o\x\2\5\f\f\7\n\9\8\m\i\v\j\u\l\n\4\u\s\6\f\n\5\7\l\9\8\y\v\9\p\r\p\h\y\g\n\a\z\0\9\t\l\0\0\7\u\s\n\k\j\f\l\8\n\m\y\o\5\a\0\2\v\c\x\e\s\g\8\m\h\0\e\p\1\i\5\8\q\b\j\s\y\d\p\s\s\r\l\7\j\n\6\1\0\r\c\7\r\a\w\r\l\5\v\b\7\3\l\q\b\5\7\g\8\q\q\h\h\l\q\h\q\5\x\4\x\b\t\l\k\0\i\6\d\i\f\x\l\p\8\7\k\y\9\v\l\m\5\5\l\b\q\r\t\h\d\w\s\4\t\s\h\r\v\p\x\t\n\3\7\v\1\t\y\t\k\t\4\7\w\r\i\h\x\q\r\m\3\e\k\z\t\a\2\b\6\o\4\z\4\4\9\n\v\d\d\l\h\d\w\c\r\b\0\u\c\a\d\q\9\3\1\5\0\9\y\f\u\z\z\k\r\k\p\f\b\y\5\x\b\2\a\r\d\g\2\j\1\3\s\v\i\p\3\5\h\l\1\q\r\j\8\i\8\v\m\d\x\n\m\w\s\9\6\7\x\s\8\0\v\c\b\i\7\e\e\x\l\a\t\x\e\o\4\8\b\r\s\1\b\c\1\w\z\e\2\i\h\8\s\u\o\k\t\u\a\s\8\x\r\o\z\c\r\c\0\1\x\e\u\p\7\x\b\h\v\z\9\e\m\i\5\v\w\b\n\3\c\1\1\x\h\3\u\1\z\5\i\9\6\4\j\p\e\6\5\g\2\n\w\p\z\w\a\3\g\k\e\f\y\l\r\b\g\2\g\s\2\3\1\4\j\l\7\y\k\m\2\m\1\u\a\m\d\3\m\1\d\d\5\7\t\v\g\d\j\5\c\7\e\c\n\c\7\t\5\x\d\8\j\s\t\h\0\s\3\b\q\k\9\a\v\h\3\0\f\x\5\n\u\8\b\w\w\u\7\y\8\8\q\4 ]] 00:07:36.665 18:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.665 18:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:36.924 [2024-12-08 18:24:54.639681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.924 [2024-12-08 18:24:54.639778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72403 ] 00:07:36.924 [2024-12-08 18:24:54.775939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.924 [2024-12-08 18:24:54.829111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.182 [2024-12-08 18:24:54.881739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.182  [2024-12-08T18:24:55.112Z] Copying: 512/512 [B] (average 250 kBps) 00:07:37.182 00:07:37.182 18:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 74eh0gg3pnhh7j9rhcmodpwzgxlghk0v1g3ebqd4o6it0ztjcox25ff7n98mivjuln4us6fn57l98yv9prphygnaz09tl007usnkjfl8nmyo5a02vcxesg8mh0ep1i58qbjsydpssrl7jn610rc7rawrl5vb73lqb57g8qqhhlqhq5x4xbtlk0i6difxlp87ky9vlm55lbqrthdws4tshrvpxtn37v1tytkt47wrihxqrm3ekzta2b6o4z449nvddlhdwcrb0ucadq931509yfuzzkrkpfby5xb2ardg2j13svip35hl1qrj8i8vmdxnmws967xs80vcbi7eexlatxeo48brs1bc1wze2ih8suoktuas8xrozcrc01xeup7xbhvz9emi5vwbn3c11xh3u1z5i964jpe65g2nwpzwa3gkefylrbg2gs2314jl7ykm2m1uamd3m1dd57tvgdj5c7ecnc7t5xd8jsth0s3bqk9avh30fx5nu8bwwu7y88q4 == \7\4\e\h\0\g\g\3\p\n\h\h\7\j\9\r\h\c\m\o\d\p\w\z\g\x\l\g\h\k\0\v\1\g\3\e\b\q\d\4\o\6\i\t\0\z\t\j\c\o\x\2\5\f\f\7\n\9\8\m\i\v\j\u\l\n\4\u\s\6\f\n\5\7\l\9\8\y\v\9\p\r\p\h\y\g\n\a\z\0\9\t\l\0\0\7\u\s\n\k\j\f\l\8\n\m\y\o\5\a\0\2\v\c\x\e\s\g\8\m\h\0\e\p\1\i\5\8\q\b\j\s\y\d\p\s\s\r\l\7\j\n\6\1\0\r\c\7\r\a\w\r\l\5\v\b\7\3\l\q\b\5\7\g\8\q\q\h\h\l\q\h\q\5\x\4\x\b\t\l\k\0\i\6\d\i\f\x\l\p\8\7\k\y\9\v\l\m\5\5\l\b\q\r\t\h\d\w\s\4\t\s\h\r\v\p\x\t\n\3\7\v\1\t\y\t\k\t\4\7\w\r\i\h\x\q\r\m\3\e\k\z\t\a\2\b\6\o\4\z\4\4\9\n\v\d\d\l\h\d\w\c\r\b\0\u\c\a\d\q\9\3\1\5\0\9\y\f\u\z\z\k\r\k\p\f\b\y\5\x\b\2\a\r\d\g\2\j\1\3\s\v\i\p\3\5\h\l\1\q\r\j\8\i\8\v\m\d\x\n\m\w\s\9\6\7\x\s\8\0\v\c\b\i\7\e\e\x\l\a\t\x\e\o\4\8\b\r\s\1\b\c\1\w\z\e\2\i\h\8\s\u\o\k\t\u\a\s\8\x\r\o\z\c\r\c\0\1\x\e\u\p\7\x\b\h\v\z\9\e\m\i\5\v\w\b\n\3\c\1\1\x\h\3\u\1\z\5\i\9\6\4\j\p\e\6\5\g\2\n\w\p\z\w\a\3\g\k\e\f\y\l\r\b\g\2\g\s\2\3\1\4\j\l\7\y\k\m\2\m\1\u\a\m\d\3\m\1\d\d\5\7\t\v\g\d\j\5\c\7\e\c\n\c\7\t\5\x\d\8\j\s\t\h\0\s\3\b\q\k\9\a\v\h\3\0\f\x\5\n\u\8\b\w\w\u\7\y\8\8\q\4 ]] 00:07:37.183 18:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.183 18:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:37.444 [2024-12-08 18:24:55.144921] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:37.444 [2024-12-08 18:24:55.145024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72418 ] 00:07:37.444 [2024-12-08 18:24:55.278726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.444 [2024-12-08 18:24:55.330186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.707 [2024-12-08 18:24:55.379145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.707  [2024-12-08T18:24:55.637Z] Copying: 512/512 [B] (average 250 kBps) 00:07:37.707 00:07:37.707 ************************************ 00:07:37.707 END TEST dd_flags_misc 00:07:37.707 ************************************ 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 74eh0gg3pnhh7j9rhcmodpwzgxlghk0v1g3ebqd4o6it0ztjcox25ff7n98mivjuln4us6fn57l98yv9prphygnaz09tl007usnkjfl8nmyo5a02vcxesg8mh0ep1i58qbjsydpssrl7jn610rc7rawrl5vb73lqb57g8qqhhlqhq5x4xbtlk0i6difxlp87ky9vlm55lbqrthdws4tshrvpxtn37v1tytkt47wrihxqrm3ekzta2b6o4z449nvddlhdwcrb0ucadq931509yfuzzkrkpfby5xb2ardg2j13svip35hl1qrj8i8vmdxnmws967xs80vcbi7eexlatxeo48brs1bc1wze2ih8suoktuas8xrozcrc01xeup7xbhvz9emi5vwbn3c11xh3u1z5i964jpe65g2nwpzwa3gkefylrbg2gs2314jl7ykm2m1uamd3m1dd57tvgdj5c7ecnc7t5xd8jsth0s3bqk9avh30fx5nu8bwwu7y88q4 == \7\4\e\h\0\g\g\3\p\n\h\h\7\j\9\r\h\c\m\o\d\p\w\z\g\x\l\g\h\k\0\v\1\g\3\e\b\q\d\4\o\6\i\t\0\z\t\j\c\o\x\2\5\f\f\7\n\9\8\m\i\v\j\u\l\n\4\u\s\6\f\n\5\7\l\9\8\y\v\9\p\r\p\h\y\g\n\a\z\0\9\t\l\0\0\7\u\s\n\k\j\f\l\8\n\m\y\o\5\a\0\2\v\c\x\e\s\g\8\m\h\0\e\p\1\i\5\8\q\b\j\s\y\d\p\s\s\r\l\7\j\n\6\1\0\r\c\7\r\a\w\r\l\5\v\b\7\3\l\q\b\5\7\g\8\q\q\h\h\l\q\h\q\5\x\4\x\b\t\l\k\0\i\6\d\i\f\x\l\p\8\7\k\y\9\v\l\m\5\5\l\b\q\r\t\h\d\w\s\4\t\s\h\r\v\p\x\t\n\3\7\v\1\t\y\t\k\t\4\7\w\r\i\h\x\q\r\m\3\e\k\z\t\a\2\b\6\o\4\z\4\4\9\n\v\d\d\l\h\d\w\c\r\b\0\u\c\a\d\q\9\3\1\5\0\9\y\f\u\z\z\k\r\k\p\f\b\y\5\x\b\2\a\r\d\g\2\j\1\3\s\v\i\p\3\5\h\l\1\q\r\j\8\i\8\v\m\d\x\n\m\w\s\9\6\7\x\s\8\0\v\c\b\i\7\e\e\x\l\a\t\x\e\o\4\8\b\r\s\1\b\c\1\w\z\e\2\i\h\8\s\u\o\k\t\u\a\s\8\x\r\o\z\c\r\c\0\1\x\e\u\p\7\x\b\h\v\z\9\e\m\i\5\v\w\b\n\3\c\1\1\x\h\3\u\1\z\5\i\9\6\4\j\p\e\6\5\g\2\n\w\p\z\w\a\3\g\k\e\f\y\l\r\b\g\2\g\s\2\3\1\4\j\l\7\y\k\m\2\m\1\u\a\m\d\3\m\1\d\d\5\7\t\v\g\d\j\5\c\7\e\c\n\c\7\t\5\x\d\8\j\s\t\h\0\s\3\b\q\k\9\a\v\h\3\0\f\x\5\n\u\8\b\w\w\u\7\y\8\8\q\4 ]] 00:07:37.707 00:07:37.707 real 0m4.080s 00:07:37.707 user 0m2.094s 00:07:37.707 sys 0m2.121s 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:37.707 * Second test run, disabling liburing, forcing AIO 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.707 18:24:55 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.966 ************************************ 00:07:37.966 START TEST dd_flag_append_forced_aio 00:07:37.966 ************************************ 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=gr695ykv7ey9c5qh3y3tee48g1n0wpw2 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=t1uj71x1vnn8xmzt4afydqvo93ze9ygp 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s gr695ykv7ey9c5qh3y3tee48g1n0wpw2 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s t1uj71x1vnn8xmzt4afydqvo93ze9ygp 00:07:37.966 18:24:55 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:37.966 [2024-12-08 18:24:55.698798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:37.966 [2024-12-08 18:24:55.698893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72442 ] 00:07:37.966 [2024-12-08 18:24:55.835384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.224 [2024-12-08 18:24:55.895082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.224 [2024-12-08 18:24:55.944718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.224  [2024-12-08T18:24:56.413Z] Copying: 32/32 [B] (average 31 kBps) 00:07:38.483 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ t1uj71x1vnn8xmzt4afydqvo93ze9ygpgr695ykv7ey9c5qh3y3tee48g1n0wpw2 == \t\1\u\j\7\1\x\1\v\n\n\8\x\m\z\t\4\a\f\y\d\q\v\o\9\3\z\e\9\y\g\p\g\r\6\9\5\y\k\v\7\e\y\9\c\5\q\h\3\y\3\t\e\e\4\8\g\1\n\0\w\p\w\2 ]] 00:07:38.483 00:07:38.483 real 0m0.522s 00:07:38.483 user 0m0.267s 00:07:38.483 sys 0m0.134s 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.483 ************************************ 00:07:38.483 END TEST dd_flag_append_forced_aio 00:07:38.483 ************************************ 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:38.483 ************************************ 00:07:38.483 START TEST dd_flag_directory_forced_aio 00:07:38.483 ************************************ 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.483 18:24:56 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:38.483 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.483 [2024-12-08 18:24:56.272551] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:38.483 [2024-12-08 18:24:56.272647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72473 ] 00:07:38.483 [2024-12-08 18:24:56.408200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.742 [2024-12-08 18:24:56.459633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.742 [2024-12-08 18:24:56.508340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.742 [2024-12-08 18:24:56.536473] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:38.742 [2024-12-08 18:24:56.536525] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:38.742 [2024-12-08 18:24:56.536539] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.742 [2024-12-08 18:24:56.641083] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.001 18:24:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:39.001 [2024-12-08 18:24:56.773271] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:39.001 [2024-12-08 18:24:56.773362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72483 ] 00:07:39.001 [2024-12-08 18:24:56.906183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.260 [2024-12-08 18:24:56.961782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.260 [2024-12-08 18:24:57.012552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.260 [2024-12-08 18:24:57.041101] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.260 [2024-12-08 18:24:57.041152] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.260 [2024-12-08 18:24:57.041182] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.260 [2024-12-08 18:24:57.149602] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:39.519 18:24:57 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.519 00:07:39.519 real 0m1.020s 00:07:39.519 user 0m0.526s 00:07:39.519 sys 0m0.286s 00:07:39.519 ************************************ 00:07:39.519 END TEST dd_flag_directory_forced_aio 00:07:39.519 ************************************ 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:39.519 ************************************ 00:07:39.519 START TEST dd_flag_nofollow_forced_aio 00:07:39.519 ************************************ 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.519 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.519 [2024-12-08 18:24:57.344219] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:39.519 [2024-12-08 18:24:57.344288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72511 ] 00:07:39.778 [2024-12-08 18:24:57.472765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.778 [2024-12-08 18:24:57.523243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.778 [2024-12-08 18:24:57.574521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.778 [2024-12-08 18:24:57.603057] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:39.778 [2024-12-08 18:24:57.603108] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:39.778 [2024-12-08 18:24:57.603122] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.036 [2024-12-08 18:24:57.706549] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.036 18:24:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:40.036 [2024-12-08 18:24:57.828236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:40.036 [2024-12-08 18:24:57.828540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72521 ] 00:07:40.036 [2024-12-08 18:24:57.960602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.295 [2024-12-08 18:24:58.013724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.295 [2024-12-08 18:24:58.062335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.295 [2024-12-08 18:24:58.090996] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:40.295 [2024-12-08 18:24:58.091046] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:40.295 [2024-12-08 18:24:58.091061] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.295 [2024-12-08 18:24:58.200959] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.552 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:40.552 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.552 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:40.553 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:40.553 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:40.553 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.553 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:40.553 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:40.553 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:40.553 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.553 [2024-12-08 18:24:58.342062] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:40.553 [2024-12-08 18:24:58.342159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72528 ] 00:07:40.553 [2024-12-08 18:24:58.469518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.810 [2024-12-08 18:24:58.524151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.810 [2024-12-08 18:24:58.574551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.810  [2024-12-08T18:24:58.998Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.068 00:07:41.068 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ dddmwbicuyap14g7ced1ot9edwu3tuf6kggqbu9i7u3djla8m1g10alh1mudd60bd5sjbjyn3ra1cdjsv8htwjd6gcxklf1g9da0j5nc3c8es9o98j3esazd2fjtu5xg8eep7wae0igjstsep3he4kt40hd1nfezyxwrc77wcgr9ett1bgi475ccin5woe1dy0w2lnr0nu0qh0g08vexx8xpjd7f2e44gorskkzgfy9nxf9co3payrjdfltmspuan32kfn41c70jtpoiot60k5l3rv1y6gg8nl1pz0o9i9s6aednj7ocxurah5u40esdh6e7r6vi7beqd1536m3ylabw8pyxzkozl0iyyu24du37hg7d0hl8wnu2xc4erded6zbej48blsbuddm55j1u4jibu33dw580apaewkkbe6ct2drdp8mm7yme9brlf3wduqp9ii69ycr7lj319x42o5mhel5ztoh912tybm1vku0jcye4aiytsdupzca5z9ro == \d\d\d\m\w\b\i\c\u\y\a\p\1\4\g\7\c\e\d\1\o\t\9\e\d\w\u\3\t\u\f\6\k\g\g\q\b\u\9\i\7\u\3\d\j\l\a\8\m\1\g\1\0\a\l\h\1\m\u\d\d\6\0\b\d\5\s\j\b\j\y\n\3\r\a\1\c\d\j\s\v\8\h\t\w\j\d\6\g\c\x\k\l\f\1\g\9\d\a\0\j\5\n\c\3\c\8\e\s\9\o\9\8\j\3\e\s\a\z\d\2\f\j\t\u\5\x\g\8\e\e\p\7\w\a\e\0\i\g\j\s\t\s\e\p\3\h\e\4\k\t\4\0\h\d\1\n\f\e\z\y\x\w\r\c\7\7\w\c\g\r\9\e\t\t\1\b\g\i\4\7\5\c\c\i\n\5\w\o\e\1\d\y\0\w\2\l\n\r\0\n\u\0\q\h\0\g\0\8\v\e\x\x\8\x\p\j\d\7\f\2\e\4\4\g\o\r\s\k\k\z\g\f\y\9\n\x\f\9\c\o\3\p\a\y\r\j\d\f\l\t\m\s\p\u\a\n\3\2\k\f\n\4\1\c\7\0\j\t\p\o\i\o\t\6\0\k\5\l\3\r\v\1\y\6\g\g\8\n\l\1\p\z\0\o\9\i\9\s\6\a\e\d\n\j\7\o\c\x\u\r\a\h\5\u\4\0\e\s\d\h\6\e\7\r\6\v\i\7\b\e\q\d\1\5\3\6\m\3\y\l\a\b\w\8\p\y\x\z\k\o\z\l\0\i\y\y\u\2\4\d\u\3\7\h\g\7\d\0\h\l\8\w\n\u\2\x\c\4\e\r\d\e\d\6\z\b\e\j\4\8\b\l\s\b\u\d\d\m\5\5\j\1\u\4\j\i\b\u\3\3\d\w\5\8\0\a\p\a\e\w\k\k\b\e\6\c\t\2\d\r\d\p\8\m\m\7\y\m\e\9\b\r\l\f\3\w\d\u\q\p\9\i\i\6\9\y\c\r\7\l\j\3\1\9\x\4\2\o\5\m\h\e\l\5\z\t\o\h\9\1\2\t\y\b\m\1\v\k\u\0\j\c\y\e\4\a\i\y\t\s\d\u\p\z\c\a\5\z\9\r\o ]] 00:07:41.068 00:07:41.068 real 0m1.522s 00:07:41.068 user 0m0.780s 00:07:41.068 sys 0m0.411s 00:07:41.068 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.068 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:41.068 ************************************ 00:07:41.068 END TEST dd_flag_nofollow_forced_aio 00:07:41.068 ************************************ 00:07:41.068 18:24:58 spdk_dd.spdk_dd_posix -- 
dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:41.069 ************************************ 00:07:41.069 START TEST dd_flag_noatime_forced_aio 00:07:41.069 ************************************ 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733682298 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733682298 00:07:41.069 18:24:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:42.005 18:24:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.263 [2024-12-08 18:24:59.942362] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:42.263 [2024-12-08 18:24:59.942649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72569 ] 00:07:42.263 [2024-12-08 18:25:00.079283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.263 [2024-12-08 18:25:00.143551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.522 [2024-12-08 18:25:00.198773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.522  [2024-12-08T18:25:00.452Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.522 00:07:42.522 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.523 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733682298 )) 00:07:42.523 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.781 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733682298 )) 00:07:42.781 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.781 [2024-12-08 18:25:00.504163] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:42.781 [2024-12-08 18:25:00.504261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72580 ] 00:07:42.781 [2024-12-08 18:25:00.636679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.781 [2024-12-08 18:25:00.687387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.040 [2024-12-08 18:25:00.736468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.040  [2024-12-08T18:25:01.229Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.299 00:07:43.299 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.299 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733682300 )) 00:07:43.299 00:07:43.299 real 0m2.118s 00:07:43.299 user 0m0.559s 00:07:43.299 sys 0m0.319s 00:07:43.299 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.299 ************************************ 00:07:43.299 END TEST dd_flag_noatime_forced_aio 00:07:43.299 ************************************ 00:07:43.299 18:25:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.299 ************************************ 00:07:43.299 START TEST dd_flags_misc_forced_aio 00:07:43.299 ************************************ 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.299 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:43.299 [2024-12-08 18:25:01.100040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:43.299 [2024-12-08 18:25:01.100283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72612 ] 00:07:43.558 [2024-12-08 18:25:01.235586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.558 [2024-12-08 18:25:01.293325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.558 [2024-12-08 18:25:01.347293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.558  [2024-12-08T18:25:01.748Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.818 00:07:43.818 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c3zytxc4nbt088lfahq888ykzcnuh7120ujp4rord7se25l0org99sy8ipxu1u0guzog6qtz7yxvryzef86fegcqargymgl3469ndn1j6wldtnvdlmrs233fbclx6rvc7mah5as13q7epqgywun36eyuhth6els12votj9zjkybmhh6slx20vtzpp6wp0ih31xbcerd53g80910yut8jx9b53u020tuac6kn9d0z70r17lm5qjagwxfozadbwf9oade5mep409xymioermr2oqv94hmbkh4w4hblo3e4aoleen1iv0asv87zuv74rgzavix6q8ahjon0oecwhroqnwcqgqjiviq4mgfqsu58gctthzz4x9uhnnniwr96180320sdc77mole951qaw0q25g3ounzmj645g9exc9iydwwvl3p36rzbaadv82wwou09txq0zd8pbaopmuw1rjmdh77fyasfmliq1m0obsjdz986ly4s8c8xueddrk04dsxm == 
\c\3\z\y\t\x\c\4\n\b\t\0\8\8\l\f\a\h\q\8\8\8\y\k\z\c\n\u\h\7\1\2\0\u\j\p\4\r\o\r\d\7\s\e\2\5\l\0\o\r\g\9\9\s\y\8\i\p\x\u\1\u\0\g\u\z\o\g\6\q\t\z\7\y\x\v\r\y\z\e\f\8\6\f\e\g\c\q\a\r\g\y\m\g\l\3\4\6\9\n\d\n\1\j\6\w\l\d\t\n\v\d\l\m\r\s\2\3\3\f\b\c\l\x\6\r\v\c\7\m\a\h\5\a\s\1\3\q\7\e\p\q\g\y\w\u\n\3\6\e\y\u\h\t\h\6\e\l\s\1\2\v\o\t\j\9\z\j\k\y\b\m\h\h\6\s\l\x\2\0\v\t\z\p\p\6\w\p\0\i\h\3\1\x\b\c\e\r\d\5\3\g\8\0\9\1\0\y\u\t\8\j\x\9\b\5\3\u\0\2\0\t\u\a\c\6\k\n\9\d\0\z\7\0\r\1\7\l\m\5\q\j\a\g\w\x\f\o\z\a\d\b\w\f\9\o\a\d\e\5\m\e\p\4\0\9\x\y\m\i\o\e\r\m\r\2\o\q\v\9\4\h\m\b\k\h\4\w\4\h\b\l\o\3\e\4\a\o\l\e\e\n\1\i\v\0\a\s\v\8\7\z\u\v\7\4\r\g\z\a\v\i\x\6\q\8\a\h\j\o\n\0\o\e\c\w\h\r\o\q\n\w\c\q\g\q\j\i\v\i\q\4\m\g\f\q\s\u\5\8\g\c\t\t\h\z\z\4\x\9\u\h\n\n\n\i\w\r\9\6\1\8\0\3\2\0\s\d\c\7\7\m\o\l\e\9\5\1\q\a\w\0\q\2\5\g\3\o\u\n\z\m\j\6\4\5\g\9\e\x\c\9\i\y\d\w\w\v\l\3\p\3\6\r\z\b\a\a\d\v\8\2\w\w\o\u\0\9\t\x\q\0\z\d\8\p\b\a\o\p\m\u\w\1\r\j\m\d\h\7\7\f\y\a\s\f\m\l\i\q\1\m\0\o\b\s\j\d\z\9\8\6\l\y\4\s\8\c\8\x\u\e\d\d\r\k\0\4\d\s\x\m ]] 00:07:43.818 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.818 18:25:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:43.818 [2024-12-08 18:25:01.617058] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:43.818 [2024-12-08 18:25:01.617155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72614 ] 00:07:43.818 [2024-12-08 18:25:01.744433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.078 [2024-12-08 18:25:01.805976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.078 [2024-12-08 18:25:01.859735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.078  [2024-12-08T18:25:02.267Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.337 00:07:44.337 18:25:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c3zytxc4nbt088lfahq888ykzcnuh7120ujp4rord7se25l0org99sy8ipxu1u0guzog6qtz7yxvryzef86fegcqargymgl3469ndn1j6wldtnvdlmrs233fbclx6rvc7mah5as13q7epqgywun36eyuhth6els12votj9zjkybmhh6slx20vtzpp6wp0ih31xbcerd53g80910yut8jx9b53u020tuac6kn9d0z70r17lm5qjagwxfozadbwf9oade5mep409xymioermr2oqv94hmbkh4w4hblo3e4aoleen1iv0asv87zuv74rgzavix6q8ahjon0oecwhroqnwcqgqjiviq4mgfqsu58gctthzz4x9uhnnniwr96180320sdc77mole951qaw0q25g3ounzmj645g9exc9iydwwvl3p36rzbaadv82wwou09txq0zd8pbaopmuw1rjmdh77fyasfmliq1m0obsjdz986ly4s8c8xueddrk04dsxm == 
\c\3\z\y\t\x\c\4\n\b\t\0\8\8\l\f\a\h\q\8\8\8\y\k\z\c\n\u\h\7\1\2\0\u\j\p\4\r\o\r\d\7\s\e\2\5\l\0\o\r\g\9\9\s\y\8\i\p\x\u\1\u\0\g\u\z\o\g\6\q\t\z\7\y\x\v\r\y\z\e\f\8\6\f\e\g\c\q\a\r\g\y\m\g\l\3\4\6\9\n\d\n\1\j\6\w\l\d\t\n\v\d\l\m\r\s\2\3\3\f\b\c\l\x\6\r\v\c\7\m\a\h\5\a\s\1\3\q\7\e\p\q\g\y\w\u\n\3\6\e\y\u\h\t\h\6\e\l\s\1\2\v\o\t\j\9\z\j\k\y\b\m\h\h\6\s\l\x\2\0\v\t\z\p\p\6\w\p\0\i\h\3\1\x\b\c\e\r\d\5\3\g\8\0\9\1\0\y\u\t\8\j\x\9\b\5\3\u\0\2\0\t\u\a\c\6\k\n\9\d\0\z\7\0\r\1\7\l\m\5\q\j\a\g\w\x\f\o\z\a\d\b\w\f\9\o\a\d\e\5\m\e\p\4\0\9\x\y\m\i\o\e\r\m\r\2\o\q\v\9\4\h\m\b\k\h\4\w\4\h\b\l\o\3\e\4\a\o\l\e\e\n\1\i\v\0\a\s\v\8\7\z\u\v\7\4\r\g\z\a\v\i\x\6\q\8\a\h\j\o\n\0\o\e\c\w\h\r\o\q\n\w\c\q\g\q\j\i\v\i\q\4\m\g\f\q\s\u\5\8\g\c\t\t\h\z\z\4\x\9\u\h\n\n\n\i\w\r\9\6\1\8\0\3\2\0\s\d\c\7\7\m\o\l\e\9\5\1\q\a\w\0\q\2\5\g\3\o\u\n\z\m\j\6\4\5\g\9\e\x\c\9\i\y\d\w\w\v\l\3\p\3\6\r\z\b\a\a\d\v\8\2\w\w\o\u\0\9\t\x\q\0\z\d\8\p\b\a\o\p\m\u\w\1\r\j\m\d\h\7\7\f\y\a\s\f\m\l\i\q\1\m\0\o\b\s\j\d\z\9\8\6\l\y\4\s\8\c\8\x\u\e\d\d\r\k\0\4\d\s\x\m ]] 00:07:44.337 18:25:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.337 18:25:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:44.337 [2024-12-08 18:25:02.154444] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:44.337 [2024-12-08 18:25:02.154549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72627 ] 00:07:44.595 [2024-12-08 18:25:02.288036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.595 [2024-12-08 18:25:02.341487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.595 [2024-12-08 18:25:02.395643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.595  [2024-12-08T18:25:02.783Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.853 00:07:44.853 18:25:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c3zytxc4nbt088lfahq888ykzcnuh7120ujp4rord7se25l0org99sy8ipxu1u0guzog6qtz7yxvryzef86fegcqargymgl3469ndn1j6wldtnvdlmrs233fbclx6rvc7mah5as13q7epqgywun36eyuhth6els12votj9zjkybmhh6slx20vtzpp6wp0ih31xbcerd53g80910yut8jx9b53u020tuac6kn9d0z70r17lm5qjagwxfozadbwf9oade5mep409xymioermr2oqv94hmbkh4w4hblo3e4aoleen1iv0asv87zuv74rgzavix6q8ahjon0oecwhroqnwcqgqjiviq4mgfqsu58gctthzz4x9uhnnniwr96180320sdc77mole951qaw0q25g3ounzmj645g9exc9iydwwvl3p36rzbaadv82wwou09txq0zd8pbaopmuw1rjmdh77fyasfmliq1m0obsjdz986ly4s8c8xueddrk04dsxm == 
\c\3\z\y\t\x\c\4\n\b\t\0\8\8\l\f\a\h\q\8\8\8\y\k\z\c\n\u\h\7\1\2\0\u\j\p\4\r\o\r\d\7\s\e\2\5\l\0\o\r\g\9\9\s\y\8\i\p\x\u\1\u\0\g\u\z\o\g\6\q\t\z\7\y\x\v\r\y\z\e\f\8\6\f\e\g\c\q\a\r\g\y\m\g\l\3\4\6\9\n\d\n\1\j\6\w\l\d\t\n\v\d\l\m\r\s\2\3\3\f\b\c\l\x\6\r\v\c\7\m\a\h\5\a\s\1\3\q\7\e\p\q\g\y\w\u\n\3\6\e\y\u\h\t\h\6\e\l\s\1\2\v\o\t\j\9\z\j\k\y\b\m\h\h\6\s\l\x\2\0\v\t\z\p\p\6\w\p\0\i\h\3\1\x\b\c\e\r\d\5\3\g\8\0\9\1\0\y\u\t\8\j\x\9\b\5\3\u\0\2\0\t\u\a\c\6\k\n\9\d\0\z\7\0\r\1\7\l\m\5\q\j\a\g\w\x\f\o\z\a\d\b\w\f\9\o\a\d\e\5\m\e\p\4\0\9\x\y\m\i\o\e\r\m\r\2\o\q\v\9\4\h\m\b\k\h\4\w\4\h\b\l\o\3\e\4\a\o\l\e\e\n\1\i\v\0\a\s\v\8\7\z\u\v\7\4\r\g\z\a\v\i\x\6\q\8\a\h\j\o\n\0\o\e\c\w\h\r\o\q\n\w\c\q\g\q\j\i\v\i\q\4\m\g\f\q\s\u\5\8\g\c\t\t\h\z\z\4\x\9\u\h\n\n\n\i\w\r\9\6\1\8\0\3\2\0\s\d\c\7\7\m\o\l\e\9\5\1\q\a\w\0\q\2\5\g\3\o\u\n\z\m\j\6\4\5\g\9\e\x\c\9\i\y\d\w\w\v\l\3\p\3\6\r\z\b\a\a\d\v\8\2\w\w\o\u\0\9\t\x\q\0\z\d\8\p\b\a\o\p\m\u\w\1\r\j\m\d\h\7\7\f\y\a\s\f\m\l\i\q\1\m\0\o\b\s\j\d\z\9\8\6\l\y\4\s\8\c\8\x\u\e\d\d\r\k\0\4\d\s\x\m ]] 00:07:44.853 18:25:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.854 18:25:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:44.854 [2024-12-08 18:25:02.687187] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:44.854 [2024-12-08 18:25:02.687293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:07:45.112 [2024-12-08 18:25:02.822863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.112 [2024-12-08 18:25:02.880728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.112 [2024-12-08 18:25:02.929291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.112  [2024-12-08T18:25:03.302Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.372 00:07:45.372 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ c3zytxc4nbt088lfahq888ykzcnuh7120ujp4rord7se25l0org99sy8ipxu1u0guzog6qtz7yxvryzef86fegcqargymgl3469ndn1j6wldtnvdlmrs233fbclx6rvc7mah5as13q7epqgywun36eyuhth6els12votj9zjkybmhh6slx20vtzpp6wp0ih31xbcerd53g80910yut8jx9b53u020tuac6kn9d0z70r17lm5qjagwxfozadbwf9oade5mep409xymioermr2oqv94hmbkh4w4hblo3e4aoleen1iv0asv87zuv74rgzavix6q8ahjon0oecwhroqnwcqgqjiviq4mgfqsu58gctthzz4x9uhnnniwr96180320sdc77mole951qaw0q25g3ounzmj645g9exc9iydwwvl3p36rzbaadv82wwou09txq0zd8pbaopmuw1rjmdh77fyasfmliq1m0obsjdz986ly4s8c8xueddrk04dsxm == 
\c\3\z\y\t\x\c\4\n\b\t\0\8\8\l\f\a\h\q\8\8\8\y\k\z\c\n\u\h\7\1\2\0\u\j\p\4\r\o\r\d\7\s\e\2\5\l\0\o\r\g\9\9\s\y\8\i\p\x\u\1\u\0\g\u\z\o\g\6\q\t\z\7\y\x\v\r\y\z\e\f\8\6\f\e\g\c\q\a\r\g\y\m\g\l\3\4\6\9\n\d\n\1\j\6\w\l\d\t\n\v\d\l\m\r\s\2\3\3\f\b\c\l\x\6\r\v\c\7\m\a\h\5\a\s\1\3\q\7\e\p\q\g\y\w\u\n\3\6\e\y\u\h\t\h\6\e\l\s\1\2\v\o\t\j\9\z\j\k\y\b\m\h\h\6\s\l\x\2\0\v\t\z\p\p\6\w\p\0\i\h\3\1\x\b\c\e\r\d\5\3\g\8\0\9\1\0\y\u\t\8\j\x\9\b\5\3\u\0\2\0\t\u\a\c\6\k\n\9\d\0\z\7\0\r\1\7\l\m\5\q\j\a\g\w\x\f\o\z\a\d\b\w\f\9\o\a\d\e\5\m\e\p\4\0\9\x\y\m\i\o\e\r\m\r\2\o\q\v\9\4\h\m\b\k\h\4\w\4\h\b\l\o\3\e\4\a\o\l\e\e\n\1\i\v\0\a\s\v\8\7\z\u\v\7\4\r\g\z\a\v\i\x\6\q\8\a\h\j\o\n\0\o\e\c\w\h\r\o\q\n\w\c\q\g\q\j\i\v\i\q\4\m\g\f\q\s\u\5\8\g\c\t\t\h\z\z\4\x\9\u\h\n\n\n\i\w\r\9\6\1\8\0\3\2\0\s\d\c\7\7\m\o\l\e\9\5\1\q\a\w\0\q\2\5\g\3\o\u\n\z\m\j\6\4\5\g\9\e\x\c\9\i\y\d\w\w\v\l\3\p\3\6\r\z\b\a\a\d\v\8\2\w\w\o\u\0\9\t\x\q\0\z\d\8\p\b\a\o\p\m\u\w\1\r\j\m\d\h\7\7\f\y\a\s\f\m\l\i\q\1\m\0\o\b\s\j\d\z\9\8\6\l\y\4\s\8\c\8\x\u\e\d\d\r\k\0\4\d\s\x\m ]] 00:07:45.372 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:45.372 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:45.372 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:45.372 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:45.372 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.372 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:45.372 [2024-12-08 18:25:03.232495] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:45.372 [2024-12-08 18:25:03.232601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72642 ] 00:07:45.631 [2024-12-08 18:25:03.364461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.631 [2024-12-08 18:25:03.420507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.631 [2024-12-08 18:25:03.473740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.631  [2024-12-08T18:25:03.821Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.891 00:07:45.891 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 61bi4qp172iszuocj0ng6gz7ffm73ppqq14txlvaloy4uk6t15a4mxupgarpbi0fhq4vd8uk70akage95wyqrbmc8gi3v9dxuwjgo18wsi8vv42j3c27raof0xkbopdmeccsqnhoeoxg1eu1xkvx62622h1csd5r4jquc4wfh1gtdeu8k516yn1nrt30ptkm61aj9dpnmnlpvhtqaxqolx4rtlfydokvwyeoigktwlkix9voz9gzn6axlygp5va6ytqwumnhzcnv4q0z7uy60ngphvwhvzo6jn956ifw9sjir1cn9xw988qp2hdm3418hbsvnb0blcs1dw6x1r4gill3t8o0im4so538jcze4x5g83ehezfdbdrhrhev0j311mtijc213vpgnuxb9095ijmglxq8ci0ewswy09vm5y6l3alkgd70z7yfhiyqhr2ltre4o355bslxaa5mhq58sfox7f6l5r7wlqrunh1j9wwi1gfslpw6zepcmhcrj636 == \6\1\b\i\4\q\p\1\7\2\i\s\z\u\o\c\j\0\n\g\6\g\z\7\f\f\m\7\3\p\p\q\q\1\4\t\x\l\v\a\l\o\y\4\u\k\6\t\1\5\a\4\m\x\u\p\g\a\r\p\b\i\0\f\h\q\4\v\d\8\u\k\7\0\a\k\a\g\e\9\5\w\y\q\r\b\m\c\8\g\i\3\v\9\d\x\u\w\j\g\o\1\8\w\s\i\8\v\v\4\2\j\3\c\2\7\r\a\o\f\0\x\k\b\o\p\d\m\e\c\c\s\q\n\h\o\e\o\x\g\1\e\u\1\x\k\v\x\6\2\6\2\2\h\1\c\s\d\5\r\4\j\q\u\c\4\w\f\h\1\g\t\d\e\u\8\k\5\1\6\y\n\1\n\r\t\3\0\p\t\k\m\6\1\a\j\9\d\p\n\m\n\l\p\v\h\t\q\a\x\q\o\l\x\4\r\t\l\f\y\d\o\k\v\w\y\e\o\i\g\k\t\w\l\k\i\x\9\v\o\z\9\g\z\n\6\a\x\l\y\g\p\5\v\a\6\y\t\q\w\u\m\n\h\z\c\n\v\4\q\0\z\7\u\y\6\0\n\g\p\h\v\w\h\v\z\o\6\j\n\9\5\6\i\f\w\9\s\j\i\r\1\c\n\9\x\w\9\8\8\q\p\2\h\d\m\3\4\1\8\h\b\s\v\n\b\0\b\l\c\s\1\d\w\6\x\1\r\4\g\i\l\l\3\t\8\o\0\i\m\4\s\o\5\3\8\j\c\z\e\4\x\5\g\8\3\e\h\e\z\f\d\b\d\r\h\r\h\e\v\0\j\3\1\1\m\t\i\j\c\2\1\3\v\p\g\n\u\x\b\9\0\9\5\i\j\m\g\l\x\q\8\c\i\0\e\w\s\w\y\0\9\v\m\5\y\6\l\3\a\l\k\g\d\7\0\z\7\y\f\h\i\y\q\h\r\2\l\t\r\e\4\o\3\5\5\b\s\l\x\a\a\5\m\h\q\5\8\s\f\o\x\7\f\6\l\5\r\7\w\l\q\r\u\n\h\1\j\9\w\w\i\1\g\f\s\l\p\w\6\z\e\p\c\m\h\c\r\j\6\3\6 ]] 00:07:45.891 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.891 18:25:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:45.891 [2024-12-08 18:25:03.743957] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:45.891 [2024-12-08 18:25:03.744062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72644 ] 00:07:46.150 [2024-12-08 18:25:03.873690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.150 [2024-12-08 18:25:03.929617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.150 [2024-12-08 18:25:03.978415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.150  [2024-12-08T18:25:04.339Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.409 00:07:46.409 18:25:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 61bi4qp172iszuocj0ng6gz7ffm73ppqq14txlvaloy4uk6t15a4mxupgarpbi0fhq4vd8uk70akage95wyqrbmc8gi3v9dxuwjgo18wsi8vv42j3c27raof0xkbopdmeccsqnhoeoxg1eu1xkvx62622h1csd5r4jquc4wfh1gtdeu8k516yn1nrt30ptkm61aj9dpnmnlpvhtqaxqolx4rtlfydokvwyeoigktwlkix9voz9gzn6axlygp5va6ytqwumnhzcnv4q0z7uy60ngphvwhvzo6jn956ifw9sjir1cn9xw988qp2hdm3418hbsvnb0blcs1dw6x1r4gill3t8o0im4so538jcze4x5g83ehezfdbdrhrhev0j311mtijc213vpgnuxb9095ijmglxq8ci0ewswy09vm5y6l3alkgd70z7yfhiyqhr2ltre4o355bslxaa5mhq58sfox7f6l5r7wlqrunh1j9wwi1gfslpw6zepcmhcrj636 == \6\1\b\i\4\q\p\1\7\2\i\s\z\u\o\c\j\0\n\g\6\g\z\7\f\f\m\7\3\p\p\q\q\1\4\t\x\l\v\a\l\o\y\4\u\k\6\t\1\5\a\4\m\x\u\p\g\a\r\p\b\i\0\f\h\q\4\v\d\8\u\k\7\0\a\k\a\g\e\9\5\w\y\q\r\b\m\c\8\g\i\3\v\9\d\x\u\w\j\g\o\1\8\w\s\i\8\v\v\4\2\j\3\c\2\7\r\a\o\f\0\x\k\b\o\p\d\m\e\c\c\s\q\n\h\o\e\o\x\g\1\e\u\1\x\k\v\x\6\2\6\2\2\h\1\c\s\d\5\r\4\j\q\u\c\4\w\f\h\1\g\t\d\e\u\8\k\5\1\6\y\n\1\n\r\t\3\0\p\t\k\m\6\1\a\j\9\d\p\n\m\n\l\p\v\h\t\q\a\x\q\o\l\x\4\r\t\l\f\y\d\o\k\v\w\y\e\o\i\g\k\t\w\l\k\i\x\9\v\o\z\9\g\z\n\6\a\x\l\y\g\p\5\v\a\6\y\t\q\w\u\m\n\h\z\c\n\v\4\q\0\z\7\u\y\6\0\n\g\p\h\v\w\h\v\z\o\6\j\n\9\5\6\i\f\w\9\s\j\i\r\1\c\n\9\x\w\9\8\8\q\p\2\h\d\m\3\4\1\8\h\b\s\v\n\b\0\b\l\c\s\1\d\w\6\x\1\r\4\g\i\l\l\3\t\8\o\0\i\m\4\s\o\5\3\8\j\c\z\e\4\x\5\g\8\3\e\h\e\z\f\d\b\d\r\h\r\h\e\v\0\j\3\1\1\m\t\i\j\c\2\1\3\v\p\g\n\u\x\b\9\0\9\5\i\j\m\g\l\x\q\8\c\i\0\e\w\s\w\y\0\9\v\m\5\y\6\l\3\a\l\k\g\d\7\0\z\7\y\f\h\i\y\q\h\r\2\l\t\r\e\4\o\3\5\5\b\s\l\x\a\a\5\m\h\q\5\8\s\f\o\x\7\f\6\l\5\r\7\w\l\q\r\u\n\h\1\j\9\w\w\i\1\g\f\s\l\p\w\6\z\e\p\c\m\h\c\r\j\6\3\6 ]] 00:07:46.409 18:25:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.409 18:25:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:46.410 [2024-12-08 18:25:04.274547] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:46.410 [2024-12-08 18:25:04.274648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72657 ] 00:07:46.669 [2024-12-08 18:25:04.402043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.669 [2024-12-08 18:25:04.459854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.669 [2024-12-08 18:25:04.512660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.669  [2024-12-08T18:25:04.858Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.928 00:07:46.928 18:25:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 61bi4qp172iszuocj0ng6gz7ffm73ppqq14txlvaloy4uk6t15a4mxupgarpbi0fhq4vd8uk70akage95wyqrbmc8gi3v9dxuwjgo18wsi8vv42j3c27raof0xkbopdmeccsqnhoeoxg1eu1xkvx62622h1csd5r4jquc4wfh1gtdeu8k516yn1nrt30ptkm61aj9dpnmnlpvhtqaxqolx4rtlfydokvwyeoigktwlkix9voz9gzn6axlygp5va6ytqwumnhzcnv4q0z7uy60ngphvwhvzo6jn956ifw9sjir1cn9xw988qp2hdm3418hbsvnb0blcs1dw6x1r4gill3t8o0im4so538jcze4x5g83ehezfdbdrhrhev0j311mtijc213vpgnuxb9095ijmglxq8ci0ewswy09vm5y6l3alkgd70z7yfhiyqhr2ltre4o355bslxaa5mhq58sfox7f6l5r7wlqrunh1j9wwi1gfslpw6zepcmhcrj636 == \6\1\b\i\4\q\p\1\7\2\i\s\z\u\o\c\j\0\n\g\6\g\z\7\f\f\m\7\3\p\p\q\q\1\4\t\x\l\v\a\l\o\y\4\u\k\6\t\1\5\a\4\m\x\u\p\g\a\r\p\b\i\0\f\h\q\4\v\d\8\u\k\7\0\a\k\a\g\e\9\5\w\y\q\r\b\m\c\8\g\i\3\v\9\d\x\u\w\j\g\o\1\8\w\s\i\8\v\v\4\2\j\3\c\2\7\r\a\o\f\0\x\k\b\o\p\d\m\e\c\c\s\q\n\h\o\e\o\x\g\1\e\u\1\x\k\v\x\6\2\6\2\2\h\1\c\s\d\5\r\4\j\q\u\c\4\w\f\h\1\g\t\d\e\u\8\k\5\1\6\y\n\1\n\r\t\3\0\p\t\k\m\6\1\a\j\9\d\p\n\m\n\l\p\v\h\t\q\a\x\q\o\l\x\4\r\t\l\f\y\d\o\k\v\w\y\e\o\i\g\k\t\w\l\k\i\x\9\v\o\z\9\g\z\n\6\a\x\l\y\g\p\5\v\a\6\y\t\q\w\u\m\n\h\z\c\n\v\4\q\0\z\7\u\y\6\0\n\g\p\h\v\w\h\v\z\o\6\j\n\9\5\6\i\f\w\9\s\j\i\r\1\c\n\9\x\w\9\8\8\q\p\2\h\d\m\3\4\1\8\h\b\s\v\n\b\0\b\l\c\s\1\d\w\6\x\1\r\4\g\i\l\l\3\t\8\o\0\i\m\4\s\o\5\3\8\j\c\z\e\4\x\5\g\8\3\e\h\e\z\f\d\b\d\r\h\r\h\e\v\0\j\3\1\1\m\t\i\j\c\2\1\3\v\p\g\n\u\x\b\9\0\9\5\i\j\m\g\l\x\q\8\c\i\0\e\w\s\w\y\0\9\v\m\5\y\6\l\3\a\l\k\g\d\7\0\z\7\y\f\h\i\y\q\h\r\2\l\t\r\e\4\o\3\5\5\b\s\l\x\a\a\5\m\h\q\5\8\s\f\o\x\7\f\6\l\5\r\7\w\l\q\r\u\n\h\1\j\9\w\w\i\1\g\f\s\l\p\w\6\z\e\p\c\m\h\c\r\j\6\3\6 ]] 00:07:46.928 18:25:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.928 18:25:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:46.928 [2024-12-08 18:25:04.798991] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:46.928 [2024-12-08 18:25:04.799092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72659 ] 00:07:47.187 [2024-12-08 18:25:04.935358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.187 [2024-12-08 18:25:04.997599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.187 [2024-12-08 18:25:05.047374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.187  [2024-12-08T18:25:05.376Z] Copying: 512/512 [B] (average 250 kBps) 00:07:47.446 00:07:47.446 ************************************ 00:07:47.446 END TEST dd_flags_misc_forced_aio 00:07:47.446 ************************************ 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 61bi4qp172iszuocj0ng6gz7ffm73ppqq14txlvaloy4uk6t15a4mxupgarpbi0fhq4vd8uk70akage95wyqrbmc8gi3v9dxuwjgo18wsi8vv42j3c27raof0xkbopdmeccsqnhoeoxg1eu1xkvx62622h1csd5r4jquc4wfh1gtdeu8k516yn1nrt30ptkm61aj9dpnmnlpvhtqaxqolx4rtlfydokvwyeoigktwlkix9voz9gzn6axlygp5va6ytqwumnhzcnv4q0z7uy60ngphvwhvzo6jn956ifw9sjir1cn9xw988qp2hdm3418hbsvnb0blcs1dw6x1r4gill3t8o0im4so538jcze4x5g83ehezfdbdrhrhev0j311mtijc213vpgnuxb9095ijmglxq8ci0ewswy09vm5y6l3alkgd70z7yfhiyqhr2ltre4o355bslxaa5mhq58sfox7f6l5r7wlqrunh1j9wwi1gfslpw6zepcmhcrj636 == \6\1\b\i\4\q\p\1\7\2\i\s\z\u\o\c\j\0\n\g\6\g\z\7\f\f\m\7\3\p\p\q\q\1\4\t\x\l\v\a\l\o\y\4\u\k\6\t\1\5\a\4\m\x\u\p\g\a\r\p\b\i\0\f\h\q\4\v\d\8\u\k\7\0\a\k\a\g\e\9\5\w\y\q\r\b\m\c\8\g\i\3\v\9\d\x\u\w\j\g\o\1\8\w\s\i\8\v\v\4\2\j\3\c\2\7\r\a\o\f\0\x\k\b\o\p\d\m\e\c\c\s\q\n\h\o\e\o\x\g\1\e\u\1\x\k\v\x\6\2\6\2\2\h\1\c\s\d\5\r\4\j\q\u\c\4\w\f\h\1\g\t\d\e\u\8\k\5\1\6\y\n\1\n\r\t\3\0\p\t\k\m\6\1\a\j\9\d\p\n\m\n\l\p\v\h\t\q\a\x\q\o\l\x\4\r\t\l\f\y\d\o\k\v\w\y\e\o\i\g\k\t\w\l\k\i\x\9\v\o\z\9\g\z\n\6\a\x\l\y\g\p\5\v\a\6\y\t\q\w\u\m\n\h\z\c\n\v\4\q\0\z\7\u\y\6\0\n\g\p\h\v\w\h\v\z\o\6\j\n\9\5\6\i\f\w\9\s\j\i\r\1\c\n\9\x\w\9\8\8\q\p\2\h\d\m\3\4\1\8\h\b\s\v\n\b\0\b\l\c\s\1\d\w\6\x\1\r\4\g\i\l\l\3\t\8\o\0\i\m\4\s\o\5\3\8\j\c\z\e\4\x\5\g\8\3\e\h\e\z\f\d\b\d\r\h\r\h\e\v\0\j\3\1\1\m\t\i\j\c\2\1\3\v\p\g\n\u\x\b\9\0\9\5\i\j\m\g\l\x\q\8\c\i\0\e\w\s\w\y\0\9\v\m\5\y\6\l\3\a\l\k\g\d\7\0\z\7\y\f\h\i\y\q\h\r\2\l\t\r\e\4\o\3\5\5\b\s\l\x\a\a\5\m\h\q\5\8\s\f\o\x\7\f\6\l\5\r\7\w\l\q\r\u\n\h\1\j\9\w\w\i\1\g\f\s\l\p\w\6\z\e\p\c\m\h\c\r\j\6\3\6 ]] 00:07:47.446 00:07:47.446 real 0m4.252s 00:07:47.446 user 0m2.126s 00:07:47.446 sys 0m1.140s 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:47.446 ************************************ 00:07:47.446 END TEST spdk_dd_posix 00:07:47.446 ************************************ 00:07:47.446 00:07:47.446 real 0m19.369s 00:07:47.446 user 0m8.754s 00:07:47.446 sys 0m6.458s 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.446 18:25:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:47.705 18:25:05 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:47.705 18:25:05 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.705 18:25:05 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.705 18:25:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:47.705 ************************************ 00:07:47.705 START TEST spdk_dd_malloc 00:07:47.705 ************************************ 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:47.705 * Looking for test storage... 00:07:47.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.705 --rc genhtml_branch_coverage=1 00:07:47.705 --rc genhtml_function_coverage=1 00:07:47.705 --rc genhtml_legend=1 00:07:47.705 --rc geninfo_all_blocks=1 00:07:47.705 --rc geninfo_unexecuted_blocks=1 00:07:47.705 00:07:47.705 ' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.705 --rc genhtml_branch_coverage=1 00:07:47.705 --rc genhtml_function_coverage=1 00:07:47.705 --rc genhtml_legend=1 00:07:47.705 --rc geninfo_all_blocks=1 00:07:47.705 --rc geninfo_unexecuted_blocks=1 00:07:47.705 00:07:47.705 ' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.705 --rc genhtml_branch_coverage=1 00:07:47.705 --rc genhtml_function_coverage=1 00:07:47.705 --rc genhtml_legend=1 00:07:47.705 --rc geninfo_all_blocks=1 00:07:47.705 --rc geninfo_unexecuted_blocks=1 00:07:47.705 00:07:47.705 ' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.705 --rc genhtml_branch_coverage=1 00:07:47.705 --rc genhtml_function_coverage=1 00:07:47.705 --rc genhtml_legend=1 00:07:47.705 --rc geninfo_all_blocks=1 00:07:47.705 --rc geninfo_unexecuted_blocks=1 00:07:47.705 00:07:47.705 ' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.705 18:25:05 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:47.705 ************************************ 00:07:47.705 START TEST dd_malloc_copy 00:07:47.705 ************************************ 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:47.705 18:25:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.964 [2024-12-08 18:25:05.647369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:47.964 [2024-12-08 18:25:05.647640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72741 ] 00:07:47.964 { 00:07:47.964 "subsystems": [ 00:07:47.964 { 00:07:47.964 "subsystem": "bdev", 00:07:47.964 "config": [ 00:07:47.964 { 00:07:47.964 "params": { 00:07:47.964 "block_size": 512, 00:07:47.964 "num_blocks": 1048576, 00:07:47.964 "name": "malloc0" 00:07:47.964 }, 00:07:47.964 "method": "bdev_malloc_create" 00:07:47.964 }, 00:07:47.964 { 00:07:47.964 "params": { 00:07:47.964 "block_size": 512, 00:07:47.964 "num_blocks": 1048576, 00:07:47.964 "name": "malloc1" 00:07:47.964 }, 00:07:47.964 "method": "bdev_malloc_create" 00:07:47.964 }, 00:07:47.964 { 00:07:47.964 "method": "bdev_wait_for_examine" 00:07:47.964 } 00:07:47.964 ] 00:07:47.964 } 00:07:47.964 ] 00:07:47.964 } 00:07:47.964 [2024-12-08 18:25:05.783463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.964 [2024-12-08 18:25:05.836564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.964 [2024-12-08 18:25:05.886383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.340  [2024-12-08T18:25:08.647Z] Copying: 245/512 [MB] (245 MBps) [2024-12-08T18:25:08.647Z] Copying: 491/512 [MB] (246 MBps) [2024-12-08T18:25:08.906Z] Copying: 512/512 [MB] (average 245 MBps) 00:07:50.976 00:07:50.976 18:25:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:50.976 18:25:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:50.976 18:25:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:50.976 18:25:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:50.976 [2024-12-08 18:25:08.892972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:50.976 [2024-12-08 18:25:08.893768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72783 ] 00:07:50.976 { 00:07:50.976 "subsystems": [ 00:07:50.976 { 00:07:50.976 "subsystem": "bdev", 00:07:50.977 "config": [ 00:07:50.977 { 00:07:50.977 "params": { 00:07:50.977 "block_size": 512, 00:07:50.977 "num_blocks": 1048576, 00:07:50.977 "name": "malloc0" 00:07:50.977 }, 00:07:50.977 "method": "bdev_malloc_create" 00:07:50.977 }, 00:07:50.977 { 00:07:50.977 "params": { 00:07:50.977 "block_size": 512, 00:07:50.977 "num_blocks": 1048576, 00:07:50.977 "name": "malloc1" 00:07:50.977 }, 00:07:50.977 "method": "bdev_malloc_create" 00:07:50.977 }, 00:07:50.977 { 00:07:50.977 "method": "bdev_wait_for_examine" 00:07:50.977 } 00:07:50.977 ] 00:07:50.977 } 00:07:50.977 ] 00:07:50.977 } 00:07:51.235 [2024-12-08 18:25:09.030644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.235 [2024-12-08 18:25:09.081066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.235 [2024-12-08 18:25:09.130341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.613  [2024-12-08T18:25:11.478Z] Copying: 249/512 [MB] (249 MBps) [2024-12-08T18:25:11.736Z] Copying: 499/512 [MB] (249 MBps) [2024-12-08T18:25:12.303Z] Copying: 512/512 [MB] (average 248 MBps) 00:07:54.373 00:07:54.373 00:07:54.373 real 0m6.467s 00:07:54.373 user 0m5.503s 00:07:54.373 sys 0m0.811s 00:07:54.373 18:25:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.373 18:25:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:54.373 ************************************ 00:07:54.373 END TEST dd_malloc_copy 00:07:54.373 ************************************ 00:07:54.373 ************************************ 00:07:54.373 END TEST spdk_dd_malloc 00:07:54.373 ************************************ 00:07:54.373 00:07:54.373 real 0m6.710s 00:07:54.373 user 0m5.631s 00:07:54.373 sys 0m0.926s 00:07:54.373 18:25:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.373 18:25:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:54.373 18:25:12 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:54.373 18:25:12 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:54.373 18:25:12 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.373 18:25:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.373 ************************************ 00:07:54.373 START TEST spdk_dd_bdev_to_bdev 00:07:54.373 ************************************ 00:07:54.373 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:54.373 * Looking for test storage... 
00:07:54.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.373 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.373 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.373 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.632 18:25:12 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:54.632 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:54.633 ************************************ 00:07:54.633 START TEST dd_inflate_file 00:07:54.633 ************************************ 00:07:54.633 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:54.633 [2024-12-08 18:25:12.402902] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:54.633 [2024-12-08 18:25:12.403175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72896 ] 00:07:54.633 [2024-12-08 18:25:12.540397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.892 [2024-12-08 18:25:12.598498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.892 [2024-12-08 18:25:12.647156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.892  [2024-12-08T18:25:13.080Z] Copying: 64/64 [MB] (average 1523 MBps) 00:07:55.150 00:07:55.150 00:07:55.150 real 0m0.573s 00:07:55.150 user 0m0.323s 00:07:55.150 sys 0m0.301s 00:07:55.150 ************************************ 00:07:55.150 END TEST dd_inflate_file 00:07:55.150 ************************************ 00:07:55.150 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.150 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:55.150 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:55.151 ************************************ 00:07:55.151 START TEST dd_copy_to_out_bdev 00:07:55.151 ************************************ 00:07:55.151 18:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:55.151 { 00:07:55.151 "subsystems": [ 00:07:55.151 { 00:07:55.151 "subsystem": "bdev", 00:07:55.151 "config": [ 00:07:55.151 { 00:07:55.151 "params": { 00:07:55.151 "trtype": "pcie", 00:07:55.151 "traddr": "0000:00:10.0", 00:07:55.151 "name": "Nvme0" 00:07:55.151 }, 00:07:55.151 "method": "bdev_nvme_attach_controller" 00:07:55.151 }, 00:07:55.151 { 00:07:55.151 "params": { 00:07:55.151 "trtype": "pcie", 00:07:55.151 "traddr": "0000:00:11.0", 00:07:55.151 "name": "Nvme1" 00:07:55.151 }, 00:07:55.151 "method": "bdev_nvme_attach_controller" 00:07:55.151 }, 00:07:55.151 { 00:07:55.151 "method": "bdev_wait_for_examine" 00:07:55.151 } 00:07:55.151 ] 00:07:55.151 } 00:07:55.151 ] 00:07:55.151 } 00:07:55.151 [2024-12-08 18:25:13.028151] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:55.151 [2024-12-08 18:25:13.028392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72929 ] 00:07:55.415 [2024-12-08 18:25:13.166996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.415 [2024-12-08 18:25:13.223007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.415 [2024-12-08 18:25:13.273332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.807  [2024-12-08T18:25:14.737Z] Copying: 51/64 [MB] (51 MBps) [2024-12-08T18:25:14.997Z] Copying: 64/64 [MB] (average 51 MBps) 00:07:57.067 00:07:57.067 ************************************ 00:07:57.067 END TEST dd_copy_to_out_bdev 00:07:57.067 ************************************ 00:07:57.067 00:07:57.067 real 0m1.943s 00:07:57.067 user 0m1.703s 00:07:57.067 sys 0m1.593s 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:57.067 ************************************ 00:07:57.067 START TEST dd_offset_magic 00:07:57.067 ************************************ 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:57.067 18:25:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:57.324 [2024-12-08 18:25:15.024145] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:57.324 [2024-12-08 18:25:15.024235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72974 ] 00:07:57.324 { 00:07:57.324 "subsystems": [ 00:07:57.324 { 00:07:57.324 "subsystem": "bdev", 00:07:57.324 "config": [ 00:07:57.324 { 00:07:57.324 "params": { 00:07:57.324 "trtype": "pcie", 00:07:57.324 "traddr": "0000:00:10.0", 00:07:57.325 "name": "Nvme0" 00:07:57.325 }, 00:07:57.325 "method": "bdev_nvme_attach_controller" 00:07:57.325 }, 00:07:57.325 { 00:07:57.325 "params": { 00:07:57.325 "trtype": "pcie", 00:07:57.325 "traddr": "0000:00:11.0", 00:07:57.325 "name": "Nvme1" 00:07:57.325 }, 00:07:57.325 "method": "bdev_nvme_attach_controller" 00:07:57.325 }, 00:07:57.325 { 00:07:57.325 "method": "bdev_wait_for_examine" 00:07:57.325 } 00:07:57.325 ] 00:07:57.325 } 00:07:57.325 ] 00:07:57.325 } 00:07:57.325 [2024-12-08 18:25:15.161798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.325 [2024-12-08 18:25:15.223134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.582 [2024-12-08 18:25:15.274998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.840  [2024-12-08T18:25:15.770Z] Copying: 65/65 [MB] (average 866 MBps) 00:07:57.840 00:07:58.098 18:25:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:58.098 18:25:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:58.098 18:25:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:58.098 18:25:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:58.098 [2024-12-08 18:25:15.833361] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:58.098 [2024-12-08 18:25:15.833495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72994 ] 00:07:58.098 { 00:07:58.098 "subsystems": [ 00:07:58.098 { 00:07:58.098 "subsystem": "bdev", 00:07:58.098 "config": [ 00:07:58.098 { 00:07:58.098 "params": { 00:07:58.098 "trtype": "pcie", 00:07:58.098 "traddr": "0000:00:10.0", 00:07:58.098 "name": "Nvme0" 00:07:58.098 }, 00:07:58.098 "method": "bdev_nvme_attach_controller" 00:07:58.098 }, 00:07:58.098 { 00:07:58.098 "params": { 00:07:58.098 "trtype": "pcie", 00:07:58.098 "traddr": "0000:00:11.0", 00:07:58.098 "name": "Nvme1" 00:07:58.098 }, 00:07:58.098 "method": "bdev_nvme_attach_controller" 00:07:58.098 }, 00:07:58.098 { 00:07:58.098 "method": "bdev_wait_for_examine" 00:07:58.098 } 00:07:58.098 ] 00:07:58.098 } 00:07:58.098 ] 00:07:58.098 } 00:07:58.098 [2024-12-08 18:25:15.971064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.355 [2024-12-08 18:25:16.036899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.355 [2024-12-08 18:25:16.091951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.355  [2024-12-08T18:25:16.544Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:58.614 00:07:58.614 18:25:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:58.614 18:25:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:58.614 18:25:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:58.614 18:25:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:58.614 18:25:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:58.614 18:25:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:58.614 18:25:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:58.614 [2024-12-08 18:25:16.502471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:58.614 [2024-12-08 18:25:16.502580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73011 ] 00:07:58.614 { 00:07:58.614 "subsystems": [ 00:07:58.614 { 00:07:58.614 "subsystem": "bdev", 00:07:58.614 "config": [ 00:07:58.614 { 00:07:58.614 "params": { 00:07:58.614 "trtype": "pcie", 00:07:58.614 "traddr": "0000:00:10.0", 00:07:58.614 "name": "Nvme0" 00:07:58.614 }, 00:07:58.614 "method": "bdev_nvme_attach_controller" 00:07:58.614 }, 00:07:58.614 { 00:07:58.614 "params": { 00:07:58.614 "trtype": "pcie", 00:07:58.614 "traddr": "0000:00:11.0", 00:07:58.614 "name": "Nvme1" 00:07:58.614 }, 00:07:58.614 "method": "bdev_nvme_attach_controller" 00:07:58.614 }, 00:07:58.614 { 00:07:58.614 "method": "bdev_wait_for_examine" 00:07:58.614 } 00:07:58.614 ] 00:07:58.614 } 00:07:58.614 ] 00:07:58.614 } 00:07:58.872 [2024-12-08 18:25:16.639486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.872 [2024-12-08 18:25:16.696051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.872 [2024-12-08 18:25:16.747137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.131  [2024-12-08T18:25:17.320Z] Copying: 65/65 [MB] (average 984 MBps) 00:07:59.390 00:07:59.390 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:59.390 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:59.390 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:59.390 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:59.390 [2024-12-08 18:25:17.280326] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:59.390 [2024-12-08 18:25:17.280440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73031 ] 00:07:59.390 { 00:07:59.390 "subsystems": [ 00:07:59.390 { 00:07:59.390 "subsystem": "bdev", 00:07:59.390 "config": [ 00:07:59.390 { 00:07:59.390 "params": { 00:07:59.390 "trtype": "pcie", 00:07:59.390 "traddr": "0000:00:10.0", 00:07:59.390 "name": "Nvme0" 00:07:59.390 }, 00:07:59.390 "method": "bdev_nvme_attach_controller" 00:07:59.390 }, 00:07:59.390 { 00:07:59.390 "params": { 00:07:59.390 "trtype": "pcie", 00:07:59.390 "traddr": "0000:00:11.0", 00:07:59.390 "name": "Nvme1" 00:07:59.390 }, 00:07:59.390 "method": "bdev_nvme_attach_controller" 00:07:59.390 }, 00:07:59.390 { 00:07:59.390 "method": "bdev_wait_for_examine" 00:07:59.390 } 00:07:59.390 ] 00:07:59.390 } 00:07:59.390 ] 00:07:59.390 } 00:07:59.649 [2024-12-08 18:25:17.408898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.649 [2024-12-08 18:25:17.475166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.649 [2024-12-08 18:25:17.526274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.909  [2024-12-08T18:25:18.098Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:00.168 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:00.168 00:08:00.168 real 0m2.915s 00:08:00.168 user 0m2.087s 00:08:00.168 sys 0m0.901s 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.168 ************************************ 00:08:00.168 END TEST dd_offset_magic 00:08:00.168 ************************************ 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:00.168 18:25:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:00.168 [2024-12-08 18:25:17.984636] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:00.168 [2024-12-08 18:25:17.984737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73062 ] 00:08:00.168 { 00:08:00.168 "subsystems": [ 00:08:00.168 { 00:08:00.168 "subsystem": "bdev", 00:08:00.168 "config": [ 00:08:00.168 { 00:08:00.168 "params": { 00:08:00.168 "trtype": "pcie", 00:08:00.168 "traddr": "0000:00:10.0", 00:08:00.168 "name": "Nvme0" 00:08:00.168 }, 00:08:00.168 "method": "bdev_nvme_attach_controller" 00:08:00.168 }, 00:08:00.168 { 00:08:00.168 "params": { 00:08:00.168 "trtype": "pcie", 00:08:00.168 "traddr": "0000:00:11.0", 00:08:00.168 "name": "Nvme1" 00:08:00.168 }, 00:08:00.168 "method": "bdev_nvme_attach_controller" 00:08:00.168 }, 00:08:00.168 { 00:08:00.168 "method": "bdev_wait_for_examine" 00:08:00.168 } 00:08:00.168 ] 00:08:00.168 } 00:08:00.168 ] 00:08:00.168 } 00:08:00.427 [2024-12-08 18:25:18.119680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.427 [2024-12-08 18:25:18.191040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.427 [2024-12-08 18:25:18.242021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.687  [2024-12-08T18:25:18.617Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:00.687 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:00.687 18:25:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:00.946 [2024-12-08 18:25:18.654664] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:00.946 [2024-12-08 18:25:18.654764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73078 ] 00:08:00.946 { 00:08:00.946 "subsystems": [ 00:08:00.946 { 00:08:00.946 "subsystem": "bdev", 00:08:00.946 "config": [ 00:08:00.946 { 00:08:00.946 "params": { 00:08:00.946 "trtype": "pcie", 00:08:00.946 "traddr": "0000:00:10.0", 00:08:00.946 "name": "Nvme0" 00:08:00.946 }, 00:08:00.946 "method": "bdev_nvme_attach_controller" 00:08:00.946 }, 00:08:00.946 { 00:08:00.946 "params": { 00:08:00.946 "trtype": "pcie", 00:08:00.946 "traddr": "0000:00:11.0", 00:08:00.946 "name": "Nvme1" 00:08:00.946 }, 00:08:00.946 "method": "bdev_nvme_attach_controller" 00:08:00.946 }, 00:08:00.946 { 00:08:00.946 "method": "bdev_wait_for_examine" 00:08:00.946 } 00:08:00.946 ] 00:08:00.946 } 00:08:00.946 ] 00:08:00.946 } 00:08:00.946 [2024-12-08 18:25:18.792117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.946 [2024-12-08 18:25:18.851222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.205 [2024-12-08 18:25:18.904831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.205  [2024-12-08T18:25:19.394Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:01.465 00:08:01.465 18:25:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:01.465 ************************************ 00:08:01.465 END TEST spdk_dd_bdev_to_bdev 00:08:01.465 ************************************ 00:08:01.465 00:08:01.465 real 0m7.244s 00:08:01.465 user 0m5.292s 00:08:01.465 sys 0m3.529s 00:08:01.465 18:25:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.465 18:25:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:01.724 18:25:19 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:01.724 18:25:19 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:01.724 18:25:19 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.724 18:25:19 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.724 18:25:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:01.724 ************************************ 00:08:01.724 START TEST spdk_dd_uring 00:08:01.724 ************************************ 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:01.724 * Looking for test storage... 
00:08:01.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.724 --rc genhtml_branch_coverage=1 00:08:01.724 --rc genhtml_function_coverage=1 00:08:01.724 --rc genhtml_legend=1 00:08:01.724 --rc geninfo_all_blocks=1 00:08:01.724 --rc geninfo_unexecuted_blocks=1 00:08:01.724 00:08:01.724 ' 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.724 --rc genhtml_branch_coverage=1 00:08:01.724 --rc genhtml_function_coverage=1 00:08:01.724 --rc genhtml_legend=1 00:08:01.724 --rc geninfo_all_blocks=1 00:08:01.724 --rc geninfo_unexecuted_blocks=1 00:08:01.724 00:08:01.724 ' 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.724 --rc genhtml_branch_coverage=1 00:08:01.724 --rc genhtml_function_coverage=1 00:08:01.724 --rc genhtml_legend=1 00:08:01.724 --rc geninfo_all_blocks=1 00:08:01.724 --rc geninfo_unexecuted_blocks=1 00:08:01.724 00:08:01.724 ' 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.724 --rc genhtml_branch_coverage=1 00:08:01.724 --rc genhtml_function_coverage=1 00:08:01.724 --rc genhtml_legend=1 00:08:01.724 --rc geninfo_all_blocks=1 00:08:01.724 --rc geninfo_unexecuted_blocks=1 00:08:01.724 00:08:01.724 ' 00:08:01.724 18:25:19 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.725 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:01.984 ************************************ 00:08:01.984 START TEST dd_uring_copy 00:08:01.984 ************************************ 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:01.984 
18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:01.984 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:01.985 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=xn6hdv4o6tksmdri82eiklz9rv6xyxvrqt4uhkt9b6ru2cf8a2y4pg14h6frm2qun8fdgkuy38xuc1q4rlqxxddpjsek3oycvf0eobt84iw0q57bcz7xpb1xzmg58n800241n8vr1zc53hfv8sstrbm7jpkxkiyma1zh3qkzw67da5ptd9mkalh8n9vl27v0br2dpkq9wyow7ex5zcxwq1b5nx0um40e7751i4yhnoc7kiphbpbn4nxdd58bqvxx2v1ksxns311jbls92chvd1yff1j4dtwpegjs7yc0hm9sqr7ejrzfn1mpikibth4zkrsxwemidntsdgu0zvp0j6j1491j0uk2pcqpebucf0mznz2icc2h4tg433b7l9gaf9i5cxe4krubekx73pafwlx63jp4l7w2jvr7i8tzzsc8o4lk57bgb38pe76kee3oan9jdhgmrqeobzzv5syaz1jt84owgahvz340nqs3rvajjhyfs8xag0rzbgj6iw8epn1qmzjh0mvj3prt9a4of8ra2uy9xacmuhnnd3a27idhh0c342izikuzw55yitm0pv7d20uipqfb9poa293iq8255z05b4pw65jvwsv70apmipwg3o48fkd773mn6shit633tjmldj67nyxb5zsdsmtw4ldzzqeft0019jydk8rjuk8dswnakv1hvhu266pafs1vka71bwp1hyuhk6kdipxmklt9omjiwe038j4vou3x079isfc63jfr2350qtmuxdsoh127qhr12fa1qveqa3d1ano0d3zxg52358v201sruloyxloqerng4zhpinzid5j74505onwvna6v8gnmahjgxy1s26v0gks1eerqn7iou7tnd9kx1uom2em1waave15vr82kou6w6nckih8tn7i6dur4tgs961txdqsnk1j5j1vlttgzjdy1goesnv7iqbr6wfsh0gst6c929xql2ytt75qioukgu72xkhrauty0c53p7fp0pxlwuab2t4ev 00:08:01.985 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
xn6hdv4o6tksmdri82eiklz9rv6xyxvrqt4uhkt9b6ru2cf8a2y4pg14h6frm2qun8fdgkuy38xuc1q4rlqxxddpjsek3oycvf0eobt84iw0q57bcz7xpb1xzmg58n800241n8vr1zc53hfv8sstrbm7jpkxkiyma1zh3qkzw67da5ptd9mkalh8n9vl27v0br2dpkq9wyow7ex5zcxwq1b5nx0um40e7751i4yhnoc7kiphbpbn4nxdd58bqvxx2v1ksxns311jbls92chvd1yff1j4dtwpegjs7yc0hm9sqr7ejrzfn1mpikibth4zkrsxwemidntsdgu0zvp0j6j1491j0uk2pcqpebucf0mznz2icc2h4tg433b7l9gaf9i5cxe4krubekx73pafwlx63jp4l7w2jvr7i8tzzsc8o4lk57bgb38pe76kee3oan9jdhgmrqeobzzv5syaz1jt84owgahvz340nqs3rvajjhyfs8xag0rzbgj6iw8epn1qmzjh0mvj3prt9a4of8ra2uy9xacmuhnnd3a27idhh0c342izikuzw55yitm0pv7d20uipqfb9poa293iq8255z05b4pw65jvwsv70apmipwg3o48fkd773mn6shit633tjmldj67nyxb5zsdsmtw4ldzzqeft0019jydk8rjuk8dswnakv1hvhu266pafs1vka71bwp1hyuhk6kdipxmklt9omjiwe038j4vou3x079isfc63jfr2350qtmuxdsoh127qhr12fa1qveqa3d1ano0d3zxg52358v201sruloyxloqerng4zhpinzid5j74505onwvna6v8gnmahjgxy1s26v0gks1eerqn7iou7tnd9kx1uom2em1waave15vr82kou6w6nckih8tn7i6dur4tgs961txdqsnk1j5j1vlttgzjdy1goesnv7iqbr6wfsh0gst6c929xql2ytt75qioukgu72xkhrauty0c53p7fp0pxlwuab2t4ev 00:08:01.985 18:25:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:01.985 [2024-12-08 18:25:19.761317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:01.985 [2024-12-08 18:25:19.761467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73156 ] 00:08:01.985 [2024-12-08 18:25:19.901027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.244 [2024-12-08 18:25:19.981397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.244 [2024-12-08 18:25:20.056994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.181  [2024-12-08T18:25:21.679Z] Copying: 511/511 [MB] (average 988 MBps) 00:08:03.749 00:08:03.749 18:25:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:03.749 18:25:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:03.749 18:25:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:03.749 18:25:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.749 { 00:08:03.749 "subsystems": [ 00:08:03.749 { 00:08:03.749 "subsystem": "bdev", 00:08:03.749 "config": [ 00:08:03.749 { 00:08:03.749 "params": { 00:08:03.749 "block_size": 512, 00:08:03.749 "num_blocks": 1048576, 00:08:03.749 "name": "malloc0" 00:08:03.749 }, 00:08:03.749 "method": "bdev_malloc_create" 00:08:03.749 }, 00:08:03.749 { 00:08:03.749 "params": { 00:08:03.749 "filename": "/dev/zram1", 00:08:03.749 "name": "uring0" 00:08:03.749 }, 00:08:03.749 "method": "bdev_uring_create" 00:08:03.749 }, 00:08:03.749 { 00:08:03.749 "method": "bdev_wait_for_examine" 00:08:03.749 } 00:08:03.749 ] 00:08:03.749 } 00:08:03.749 ] 00:08:03.749 } 00:08:03.749 [2024-12-08 18:25:21.435537] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:03.749 [2024-12-08 18:25:21.435655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73183 ] 00:08:03.749 [2024-12-08 18:25:21.572926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.749 [2024-12-08 18:25:21.630423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.008 [2024-12-08 18:25:21.700875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.385  [2024-12-08T18:25:24.253Z] Copying: 241/512 [MB] (241 MBps) [2024-12-08T18:25:24.253Z] Copying: 493/512 [MB] (252 MBps) [2024-12-08T18:25:24.512Z] Copying: 512/512 [MB] (average 247 MBps) 00:08:06.582 00:08:06.582 18:25:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:06.582 18:25:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:06.582 18:25:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:06.582 18:25:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:06.582 [2024-12-08 18:25:24.417499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:06.582 [2024-12-08 18:25:24.417591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73227 ] 00:08:06.582 { 00:08:06.582 "subsystems": [ 00:08:06.582 { 00:08:06.582 "subsystem": "bdev", 00:08:06.582 "config": [ 00:08:06.582 { 00:08:06.582 "params": { 00:08:06.582 "block_size": 512, 00:08:06.582 "num_blocks": 1048576, 00:08:06.582 "name": "malloc0" 00:08:06.582 }, 00:08:06.583 "method": "bdev_malloc_create" 00:08:06.583 }, 00:08:06.583 { 00:08:06.583 "params": { 00:08:06.583 "filename": "/dev/zram1", 00:08:06.583 "name": "uring0" 00:08:06.583 }, 00:08:06.583 "method": "bdev_uring_create" 00:08:06.583 }, 00:08:06.583 { 00:08:06.583 "method": "bdev_wait_for_examine" 00:08:06.583 } 00:08:06.583 ] 00:08:06.583 } 00:08:06.583 ] 00:08:06.583 } 00:08:06.841 [2024-12-08 18:25:24.547902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.841 [2024-12-08 18:25:24.606465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.841 [2024-12-08 18:25:24.656680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.216  [2024-12-08T18:25:27.081Z] Copying: 169/512 [MB] (169 MBps) [2024-12-08T18:25:28.013Z] Copying: 363/512 [MB] (193 MBps) [2024-12-08T18:25:28.271Z] Copying: 512/512 [MB] (average 179 MBps) 00:08:10.341 00:08:10.341 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:10.341 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
xn6hdv4o6tksmdri82eiklz9rv6xyxvrqt4uhkt9b6ru2cf8a2y4pg14h6frm2qun8fdgkuy38xuc1q4rlqxxddpjsek3oycvf0eobt84iw0q57bcz7xpb1xzmg58n800241n8vr1zc53hfv8sstrbm7jpkxkiyma1zh3qkzw67da5ptd9mkalh8n9vl27v0br2dpkq9wyow7ex5zcxwq1b5nx0um40e7751i4yhnoc7kiphbpbn4nxdd58bqvxx2v1ksxns311jbls92chvd1yff1j4dtwpegjs7yc0hm9sqr7ejrzfn1mpikibth4zkrsxwemidntsdgu0zvp0j6j1491j0uk2pcqpebucf0mznz2icc2h4tg433b7l9gaf9i5cxe4krubekx73pafwlx63jp4l7w2jvr7i8tzzsc8o4lk57bgb38pe76kee3oan9jdhgmrqeobzzv5syaz1jt84owgahvz340nqs3rvajjhyfs8xag0rzbgj6iw8epn1qmzjh0mvj3prt9a4of8ra2uy9xacmuhnnd3a27idhh0c342izikuzw55yitm0pv7d20uipqfb9poa293iq8255z05b4pw65jvwsv70apmipwg3o48fkd773mn6shit633tjmldj67nyxb5zsdsmtw4ldzzqeft0019jydk8rjuk8dswnakv1hvhu266pafs1vka71bwp1hyuhk6kdipxmklt9omjiwe038j4vou3x079isfc63jfr2350qtmuxdsoh127qhr12fa1qveqa3d1ano0d3zxg52358v201sruloyxloqerng4zhpinzid5j74505onwvna6v8gnmahjgxy1s26v0gks1eerqn7iou7tnd9kx1uom2em1waave15vr82kou6w6nckih8tn7i6dur4tgs961txdqsnk1j5j1vlttgzjdy1goesnv7iqbr6wfsh0gst6c929xql2ytt75qioukgu72xkhrauty0c53p7fp0pxlwuab2t4ev == \x\n\6\h\d\v\4\o\6\t\k\s\m\d\r\i\8\2\e\i\k\l\z\9\r\v\6\x\y\x\v\r\q\t\4\u\h\k\t\9\b\6\r\u\2\c\f\8\a\2\y\4\p\g\1\4\h\6\f\r\m\2\q\u\n\8\f\d\g\k\u\y\3\8\x\u\c\1\q\4\r\l\q\x\x\d\d\p\j\s\e\k\3\o\y\c\v\f\0\e\o\b\t\8\4\i\w\0\q\5\7\b\c\z\7\x\p\b\1\x\z\m\g\5\8\n\8\0\0\2\4\1\n\8\v\r\1\z\c\5\3\h\f\v\8\s\s\t\r\b\m\7\j\p\k\x\k\i\y\m\a\1\z\h\3\q\k\z\w\6\7\d\a\5\p\t\d\9\m\k\a\l\h\8\n\9\v\l\2\7\v\0\b\r\2\d\p\k\q\9\w\y\o\w\7\e\x\5\z\c\x\w\q\1\b\5\n\x\0\u\m\4\0\e\7\7\5\1\i\4\y\h\n\o\c\7\k\i\p\h\b\p\b\n\4\n\x\d\d\5\8\b\q\v\x\x\2\v\1\k\s\x\n\s\3\1\1\j\b\l\s\9\2\c\h\v\d\1\y\f\f\1\j\4\d\t\w\p\e\g\j\s\7\y\c\0\h\m\9\s\q\r\7\e\j\r\z\f\n\1\m\p\i\k\i\b\t\h\4\z\k\r\s\x\w\e\m\i\d\n\t\s\d\g\u\0\z\v\p\0\j\6\j\1\4\9\1\j\0\u\k\2\p\c\q\p\e\b\u\c\f\0\m\z\n\z\2\i\c\c\2\h\4\t\g\4\3\3\b\7\l\9\g\a\f\9\i\5\c\x\e\4\k\r\u\b\e\k\x\7\3\p\a\f\w\l\x\6\3\j\p\4\l\7\w\2\j\v\r\7\i\8\t\z\z\s\c\8\o\4\l\k\5\7\b\g\b\3\8\p\e\7\6\k\e\e\3\o\a\n\9\j\d\h\g\m\r\q\e\o\b\z\z\v\5\s\y\a\z\1\j\t\8\4\o\w\g\a\h\v\z\3\4\0\n\q\s\3\r\v\a\j\j\h\y\f\s\8\x\a\g\0\r\z\b\g\j\6\i\w\8\e\p\n\1\q\m\z\j\h\0\m\v\j\3\p\r\t\9\a\4\o\f\8\r\a\2\u\y\9\x\a\c\m\u\h\n\n\d\3\a\2\7\i\d\h\h\0\c\3\4\2\i\z\i\k\u\z\w\5\5\y\i\t\m\0\p\v\7\d\2\0\u\i\p\q\f\b\9\p\o\a\2\9\3\i\q\8\2\5\5\z\0\5\b\4\p\w\6\5\j\v\w\s\v\7\0\a\p\m\i\p\w\g\3\o\4\8\f\k\d\7\7\3\m\n\6\s\h\i\t\6\3\3\t\j\m\l\d\j\6\7\n\y\x\b\5\z\s\d\s\m\t\w\4\l\d\z\z\q\e\f\t\0\0\1\9\j\y\d\k\8\r\j\u\k\8\d\s\w\n\a\k\v\1\h\v\h\u\2\6\6\p\a\f\s\1\v\k\a\7\1\b\w\p\1\h\y\u\h\k\6\k\d\i\p\x\m\k\l\t\9\o\m\j\i\w\e\0\3\8\j\4\v\o\u\3\x\0\7\9\i\s\f\c\6\3\j\f\r\2\3\5\0\q\t\m\u\x\d\s\o\h\1\2\7\q\h\r\1\2\f\a\1\q\v\e\q\a\3\d\1\a\n\o\0\d\3\z\x\g\5\2\3\5\8\v\2\0\1\s\r\u\l\o\y\x\l\o\q\e\r\n\g\4\z\h\p\i\n\z\i\d\5\j\7\4\5\0\5\o\n\w\v\n\a\6\v\8\g\n\m\a\h\j\g\x\y\1\s\2\6\v\0\g\k\s\1\e\e\r\q\n\7\i\o\u\7\t\n\d\9\k\x\1\u\o\m\2\e\m\1\w\a\a\v\e\1\5\v\r\8\2\k\o\u\6\w\6\n\c\k\i\h\8\t\n\7\i\6\d\u\r\4\t\g\s\9\6\1\t\x\d\q\s\n\k\1\j\5\j\1\v\l\t\t\g\z\j\d\y\1\g\o\e\s\n\v\7\i\q\b\r\6\w\f\s\h\0\g\s\t\6\c\9\2\9\x\q\l\2\y\t\t\7\5\q\i\o\u\k\g\u\7\2\x\k\h\r\a\u\t\y\0\c\5\3\p\7\f\p\0\p\x\l\w\u\a\b\2\t\4\e\v ]] 00:08:10.341 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:10.341 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
xn6hdv4o6tksmdri82eiklz9rv6xyxvrqt4uhkt9b6ru2cf8a2y4pg14h6frm2qun8fdgkuy38xuc1q4rlqxxddpjsek3oycvf0eobt84iw0q57bcz7xpb1xzmg58n800241n8vr1zc53hfv8sstrbm7jpkxkiyma1zh3qkzw67da5ptd9mkalh8n9vl27v0br2dpkq9wyow7ex5zcxwq1b5nx0um40e7751i4yhnoc7kiphbpbn4nxdd58bqvxx2v1ksxns311jbls92chvd1yff1j4dtwpegjs7yc0hm9sqr7ejrzfn1mpikibth4zkrsxwemidntsdgu0zvp0j6j1491j0uk2pcqpebucf0mznz2icc2h4tg433b7l9gaf9i5cxe4krubekx73pafwlx63jp4l7w2jvr7i8tzzsc8o4lk57bgb38pe76kee3oan9jdhgmrqeobzzv5syaz1jt84owgahvz340nqs3rvajjhyfs8xag0rzbgj6iw8epn1qmzjh0mvj3prt9a4of8ra2uy9xacmuhnnd3a27idhh0c342izikuzw55yitm0pv7d20uipqfb9poa293iq8255z05b4pw65jvwsv70apmipwg3o48fkd773mn6shit633tjmldj67nyxb5zsdsmtw4ldzzqeft0019jydk8rjuk8dswnakv1hvhu266pafs1vka71bwp1hyuhk6kdipxmklt9omjiwe038j4vou3x079isfc63jfr2350qtmuxdsoh127qhr12fa1qveqa3d1ano0d3zxg52358v201sruloyxloqerng4zhpinzid5j74505onwvna6v8gnmahjgxy1s26v0gks1eerqn7iou7tnd9kx1uom2em1waave15vr82kou6w6nckih8tn7i6dur4tgs961txdqsnk1j5j1vlttgzjdy1goesnv7iqbr6wfsh0gst6c929xql2ytt75qioukgu72xkhrauty0c53p7fp0pxlwuab2t4ev == \x\n\6\h\d\v\4\o\6\t\k\s\m\d\r\i\8\2\e\i\k\l\z\9\r\v\6\x\y\x\v\r\q\t\4\u\h\k\t\9\b\6\r\u\2\c\f\8\a\2\y\4\p\g\1\4\h\6\f\r\m\2\q\u\n\8\f\d\g\k\u\y\3\8\x\u\c\1\q\4\r\l\q\x\x\d\d\p\j\s\e\k\3\o\y\c\v\f\0\e\o\b\t\8\4\i\w\0\q\5\7\b\c\z\7\x\p\b\1\x\z\m\g\5\8\n\8\0\0\2\4\1\n\8\v\r\1\z\c\5\3\h\f\v\8\s\s\t\r\b\m\7\j\p\k\x\k\i\y\m\a\1\z\h\3\q\k\z\w\6\7\d\a\5\p\t\d\9\m\k\a\l\h\8\n\9\v\l\2\7\v\0\b\r\2\d\p\k\q\9\w\y\o\w\7\e\x\5\z\c\x\w\q\1\b\5\n\x\0\u\m\4\0\e\7\7\5\1\i\4\y\h\n\o\c\7\k\i\p\h\b\p\b\n\4\n\x\d\d\5\8\b\q\v\x\x\2\v\1\k\s\x\n\s\3\1\1\j\b\l\s\9\2\c\h\v\d\1\y\f\f\1\j\4\d\t\w\p\e\g\j\s\7\y\c\0\h\m\9\s\q\r\7\e\j\r\z\f\n\1\m\p\i\k\i\b\t\h\4\z\k\r\s\x\w\e\m\i\d\n\t\s\d\g\u\0\z\v\p\0\j\6\j\1\4\9\1\j\0\u\k\2\p\c\q\p\e\b\u\c\f\0\m\z\n\z\2\i\c\c\2\h\4\t\g\4\3\3\b\7\l\9\g\a\f\9\i\5\c\x\e\4\k\r\u\b\e\k\x\7\3\p\a\f\w\l\x\6\3\j\p\4\l\7\w\2\j\v\r\7\i\8\t\z\z\s\c\8\o\4\l\k\5\7\b\g\b\3\8\p\e\7\6\k\e\e\3\o\a\n\9\j\d\h\g\m\r\q\e\o\b\z\z\v\5\s\y\a\z\1\j\t\8\4\o\w\g\a\h\v\z\3\4\0\n\q\s\3\r\v\a\j\j\h\y\f\s\8\x\a\g\0\r\z\b\g\j\6\i\w\8\e\p\n\1\q\m\z\j\h\0\m\v\j\3\p\r\t\9\a\4\o\f\8\r\a\2\u\y\9\x\a\c\m\u\h\n\n\d\3\a\2\7\i\d\h\h\0\c\3\4\2\i\z\i\k\u\z\w\5\5\y\i\t\m\0\p\v\7\d\2\0\u\i\p\q\f\b\9\p\o\a\2\9\3\i\q\8\2\5\5\z\0\5\b\4\p\w\6\5\j\v\w\s\v\7\0\a\p\m\i\p\w\g\3\o\4\8\f\k\d\7\7\3\m\n\6\s\h\i\t\6\3\3\t\j\m\l\d\j\6\7\n\y\x\b\5\z\s\d\s\m\t\w\4\l\d\z\z\q\e\f\t\0\0\1\9\j\y\d\k\8\r\j\u\k\8\d\s\w\n\a\k\v\1\h\v\h\u\2\6\6\p\a\f\s\1\v\k\a\7\1\b\w\p\1\h\y\u\h\k\6\k\d\i\p\x\m\k\l\t\9\o\m\j\i\w\e\0\3\8\j\4\v\o\u\3\x\0\7\9\i\s\f\c\6\3\j\f\r\2\3\5\0\q\t\m\u\x\d\s\o\h\1\2\7\q\h\r\1\2\f\a\1\q\v\e\q\a\3\d\1\a\n\o\0\d\3\z\x\g\5\2\3\5\8\v\2\0\1\s\r\u\l\o\y\x\l\o\q\e\r\n\g\4\z\h\p\i\n\z\i\d\5\j\7\4\5\0\5\o\n\w\v\n\a\6\v\8\g\n\m\a\h\j\g\x\y\1\s\2\6\v\0\g\k\s\1\e\e\r\q\n\7\i\o\u\7\t\n\d\9\k\x\1\u\o\m\2\e\m\1\w\a\a\v\e\1\5\v\r\8\2\k\o\u\6\w\6\n\c\k\i\h\8\t\n\7\i\6\d\u\r\4\t\g\s\9\6\1\t\x\d\q\s\n\k\1\j\5\j\1\v\l\t\t\g\z\j\d\y\1\g\o\e\s\n\v\7\i\q\b\r\6\w\f\s\h\0\g\s\t\6\c\9\2\9\x\q\l\2\y\t\t\7\5\q\i\o\u\k\g\u\7\2\x\k\h\r\a\u\t\y\0\c\5\3\p\7\f\p\0\p\x\l\w\u\a\b\2\t\4\e\v ]] 00:08:10.341 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:10.598 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:10.598 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:10.598 18:25:28 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:10.598 18:25:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.598 [2024-12-08 18:25:28.519019] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:10.598 [2024-12-08 18:25:28.519114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73291 ] 00:08:10.856 { 00:08:10.856 "subsystems": [ 00:08:10.856 { 00:08:10.856 "subsystem": "bdev", 00:08:10.856 "config": [ 00:08:10.856 { 00:08:10.856 "params": { 00:08:10.856 "block_size": 512, 00:08:10.856 "num_blocks": 1048576, 00:08:10.856 "name": "malloc0" 00:08:10.856 }, 00:08:10.856 "method": "bdev_malloc_create" 00:08:10.856 }, 00:08:10.856 { 00:08:10.856 "params": { 00:08:10.856 "filename": "/dev/zram1", 00:08:10.856 "name": "uring0" 00:08:10.856 }, 00:08:10.856 "method": "bdev_uring_create" 00:08:10.856 }, 00:08:10.856 { 00:08:10.856 "method": "bdev_wait_for_examine" 00:08:10.856 } 00:08:10.856 ] 00:08:10.856 } 00:08:10.856 ] 00:08:10.856 } 00:08:10.856 [2024-12-08 18:25:28.649608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.856 [2024-12-08 18:25:28.722939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.856 [2024-12-08 18:25:28.774730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.232  [2024-12-08T18:25:31.147Z] Copying: 180/512 [MB] (180 MBps) [2024-12-08T18:25:32.085Z] Copying: 346/512 [MB] (166 MBps) [2024-12-08T18:25:32.345Z] Copying: 512/512 [MB] (average 173 MBps) 00:08:14.415 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:14.415 18:25:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:14.415 [2024-12-08 18:25:32.316132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:14.415 [2024-12-08 18:25:32.316249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73336 ] 00:08:14.415 { 00:08:14.415 "subsystems": [ 00:08:14.415 { 00:08:14.415 "subsystem": "bdev", 00:08:14.415 "config": [ 00:08:14.415 { 00:08:14.415 "params": { 00:08:14.415 "block_size": 512, 00:08:14.415 "num_blocks": 1048576, 00:08:14.415 "name": "malloc0" 00:08:14.415 }, 00:08:14.415 "method": "bdev_malloc_create" 00:08:14.415 }, 00:08:14.415 { 00:08:14.415 "params": { 00:08:14.415 "filename": "/dev/zram1", 00:08:14.415 "name": "uring0" 00:08:14.415 }, 00:08:14.415 "method": "bdev_uring_create" 00:08:14.415 }, 00:08:14.415 { 00:08:14.415 "params": { 00:08:14.415 "name": "uring0" 00:08:14.415 }, 00:08:14.415 "method": "bdev_uring_delete" 00:08:14.415 }, 00:08:14.415 { 00:08:14.415 "method": "bdev_wait_for_examine" 00:08:14.415 } 00:08:14.415 ] 00:08:14.415 } 00:08:14.415 ] 00:08:14.415 } 00:08:14.674 [2024-12-08 18:25:32.449554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.674 [2024-12-08 18:25:32.536883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.674 [2024-12-08 18:25:32.588646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.933  [2024-12-08T18:25:33.432Z] Copying: 0/0 [B] (average 0 Bps) 00:08:15.502 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:15.502 18:25:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:15.502 18:25:33 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:15.502 { 00:08:15.502 "subsystems": [ 00:08:15.502 { 00:08:15.502 "subsystem": "bdev", 00:08:15.502 "config": [ 00:08:15.502 { 00:08:15.502 "params": { 00:08:15.502 "block_size": 512, 00:08:15.502 "num_blocks": 1048576, 00:08:15.502 "name": "malloc0" 00:08:15.502 }, 00:08:15.502 "method": "bdev_malloc_create" 00:08:15.502 }, 00:08:15.502 { 00:08:15.502 "params": { 00:08:15.502 "filename": "/dev/zram1", 00:08:15.502 "name": "uring0" 00:08:15.502 }, 00:08:15.502 "method": "bdev_uring_create" 00:08:15.502 }, 00:08:15.502 { 00:08:15.502 "params": { 00:08:15.502 "name": "uring0" 00:08:15.502 }, 00:08:15.502 "method": "bdev_uring_delete" 00:08:15.502 }, 00:08:15.502 { 00:08:15.502 "method": "bdev_wait_for_examine" 00:08:15.502 } 00:08:15.502 ] 00:08:15.502 } 00:08:15.502 ] 00:08:15.502 } 00:08:15.502 [2024-12-08 18:25:33.267101] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:15.502 [2024-12-08 18:25:33.267253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73370 ] 00:08:15.502 [2024-12-08 18:25:33.414936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.761 [2024-12-08 18:25:33.474373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.761 [2024-12-08 18:25:33.528630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.020 [2024-12-08 18:25:33.723039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:16.020 [2024-12-08 18:25:33.723108] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:16.020 [2024-12-08 18:25:33.723118] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:16.020 [2024-12-08 18:25:33.723127] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.280 [2024-12-08 18:25:34.027348] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:16.280 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:16.538 00:08:16.538 real 0m14.716s 00:08:16.538 user 0m9.861s 00:08:16.538 sys 0m12.471s 00:08:16.538 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.538 18:25:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.538 ************************************ 00:08:16.538 END TEST dd_uring_copy 00:08:16.538 ************************************ 00:08:16.538 00:08:16.538 real 0m14.976s 00:08:16.538 user 0m10.010s 00:08:16.538 sys 0m12.580s 00:08:16.538 18:25:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.538 18:25:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:16.538 ************************************ 00:08:16.538 END TEST spdk_dd_uring 00:08:16.539 ************************************ 00:08:16.799 18:25:34 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:16.799 18:25:34 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.799 18:25:34 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.799 18:25:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 ************************************ 00:08:16.799 START TEST spdk_dd_sparse 00:08:16.799 ************************************ 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:16.799 * Looking for test storage... 00:08:16.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.799 --rc genhtml_branch_coverage=1 00:08:16.799 --rc genhtml_function_coverage=1 00:08:16.799 --rc genhtml_legend=1 00:08:16.799 --rc geninfo_all_blocks=1 00:08:16.799 --rc geninfo_unexecuted_blocks=1 00:08:16.799 00:08:16.799 ' 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.799 --rc genhtml_branch_coverage=1 00:08:16.799 --rc genhtml_function_coverage=1 00:08:16.799 --rc genhtml_legend=1 00:08:16.799 --rc geninfo_all_blocks=1 00:08:16.799 --rc geninfo_unexecuted_blocks=1 00:08:16.799 00:08:16.799 ' 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.799 --rc genhtml_branch_coverage=1 00:08:16.799 --rc genhtml_function_coverage=1 00:08:16.799 --rc genhtml_legend=1 00:08:16.799 --rc geninfo_all_blocks=1 00:08:16.799 --rc geninfo_unexecuted_blocks=1 00:08:16.799 00:08:16.799 ' 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:16.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.799 --rc genhtml_branch_coverage=1 00:08:16.799 --rc genhtml_function_coverage=1 00:08:16.799 --rc genhtml_legend=1 00:08:16.799 --rc geninfo_all_blocks=1 00:08:16.799 --rc geninfo_unexecuted_blocks=1 00:08:16.799 00:08:16.799 ' 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.799 18:25:34 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:16.799 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:16.800 1+0 records in 00:08:16.800 1+0 records out 00:08:16.800 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00574088 s, 731 MB/s 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:16.800 1+0 records in 00:08:16.800 1+0 records out 00:08:16.800 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00524487 s, 800 MB/s 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:16.800 1+0 records in 00:08:16.800 1+0 records out 00:08:16.800 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00699937 s, 599 MB/s 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:16.800 ************************************ 00:08:16.800 START TEST dd_sparse_file_to_file 00:08:16.800 ************************************ 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:16.800 18:25:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 [2024-12-08 18:25:34.757749] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
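For reference, the prepare step traced above (dd/sparse.sh@18-22) reduces to the shell sequence below: a 100 MiB backing file for the AIO bdev, plus a 36 MiB input file that stays sparse because the three 4 MiB writes land at offsets 0, 16 MiB and 32 MiB. This restates the traced commands rather than adding anything to the test.

# backing file for the later bdev_aio_create step (104857600 bytes = 100 MiB)
truncate dd_sparse_aio_disk --size 104857600

# sparse input: 4 MiB of zeroes at offsets 0, 16 MiB (seek=4) and 32 MiB (seek=8),
# for an apparent size of 37748736 bytes with only 12 MiB actually allocated
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8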
00:08:17.059 [2024-12-08 18:25:34.757841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73475 ] 00:08:17.059 { 00:08:17.059 "subsystems": [ 00:08:17.059 { 00:08:17.059 "subsystem": "bdev", 00:08:17.059 "config": [ 00:08:17.059 { 00:08:17.059 "params": { 00:08:17.059 "block_size": 4096, 00:08:17.059 "filename": "dd_sparse_aio_disk", 00:08:17.059 "name": "dd_aio" 00:08:17.059 }, 00:08:17.059 "method": "bdev_aio_create" 00:08:17.059 }, 00:08:17.059 { 00:08:17.059 "params": { 00:08:17.059 "lvs_name": "dd_lvstore", 00:08:17.059 "bdev_name": "dd_aio" 00:08:17.059 }, 00:08:17.059 "method": "bdev_lvol_create_lvstore" 00:08:17.059 }, 00:08:17.059 { 00:08:17.059 "method": "bdev_wait_for_examine" 00:08:17.059 } 00:08:17.059 ] 00:08:17.059 } 00:08:17.059 ] 00:08:17.059 } 00:08:17.059 [2024-12-08 18:25:34.895518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.059 [2024-12-08 18:25:34.958695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.318 [2024-12-08 18:25:35.014039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.319  [2024-12-08T18:25:35.509Z] Copying: 12/36 [MB] (average 800 MBps) 00:08:17.579 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:17.579 00:08:17.579 real 0m0.674s 00:08:17.579 user 0m0.397s 00:08:17.579 sys 0m0.379s 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.579 ************************************ 00:08:17.579 END TEST dd_sparse_file_to_file 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:17.579 ************************************ 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:17.579 ************************************ 00:08:17.579 START TEST dd_sparse_file_to_bdev 
00:08:17.579 ************************************ 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:17.579 18:25:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:17.579 [2024-12-08 18:25:35.482647] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:17.579 [2024-12-08 18:25:35.482780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73512 ] 00:08:17.579 { 00:08:17.579 "subsystems": [ 00:08:17.579 { 00:08:17.579 "subsystem": "bdev", 00:08:17.579 "config": [ 00:08:17.579 { 00:08:17.579 "params": { 00:08:17.579 "block_size": 4096, 00:08:17.579 "filename": "dd_sparse_aio_disk", 00:08:17.579 "name": "dd_aio" 00:08:17.579 }, 00:08:17.579 "method": "bdev_aio_create" 00:08:17.579 }, 00:08:17.579 { 00:08:17.579 "params": { 00:08:17.579 "lvs_name": "dd_lvstore", 00:08:17.579 "lvol_name": "dd_lvol", 00:08:17.579 "size_in_mib": 36, 00:08:17.579 "thin_provision": true 00:08:17.579 }, 00:08:17.579 "method": "bdev_lvol_create" 00:08:17.579 }, 00:08:17.579 { 00:08:17.579 "method": "bdev_wait_for_examine" 00:08:17.579 } 00:08:17.579 ] 00:08:17.579 } 00:08:17.579 ] 00:08:17.579 } 00:08:17.837 [2024-12-08 18:25:35.619831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.837 [2024-12-08 18:25:35.687432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.837 [2024-12-08 18:25:35.745066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.095  [2024-12-08T18:25:36.283Z] Copying: 12/36 [MB] (average 521 MBps) 00:08:18.353 00:08:18.353 00:08:18.353 real 0m0.631s 00:08:18.353 user 0m0.383s 00:08:18.353 sys 0m0.357s 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.353 ************************************ 00:08:18.353 END TEST dd_sparse_file_to_bdev 00:08:18.353 ************************************ 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:18.353 ************************************ 00:08:18.353 START TEST dd_sparse_bdev_to_file 00:08:18.353 ************************************ 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:18.353 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:18.353 [2024-12-08 18:25:36.166680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:18.353 [2024-12-08 18:25:36.166782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73550 ] 00:08:18.353 { 00:08:18.353 "subsystems": [ 00:08:18.353 { 00:08:18.353 "subsystem": "bdev", 00:08:18.353 "config": [ 00:08:18.354 { 00:08:18.354 "params": { 00:08:18.354 "block_size": 4096, 00:08:18.354 "filename": "dd_sparse_aio_disk", 00:08:18.354 "name": "dd_aio" 00:08:18.354 }, 00:08:18.354 "method": "bdev_aio_create" 00:08:18.354 }, 00:08:18.354 { 00:08:18.354 "method": "bdev_wait_for_examine" 00:08:18.354 } 00:08:18.354 ] 00:08:18.354 } 00:08:18.354 ] 00:08:18.354 } 00:08:18.611 [2024-12-08 18:25:36.304626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.611 [2024-12-08 18:25:36.384669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.611 [2024-12-08 18:25:36.447015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.611  [2024-12-08T18:25:36.800Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:18.870 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:19.130 00:08:19.130 real 0m0.713s 00:08:19.130 user 0m0.440s 00:08:19.130 sys 0m0.399s 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.130 ************************************ 00:08:19.130 END TEST dd_sparse_bdev_to_file 00:08:19.130 ************************************ 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:19.130 00:08:19.130 real 0m2.417s 00:08:19.130 user 0m1.407s 00:08:19.130 sys 0m1.338s 00:08:19.130 ************************************ 00:08:19.130 END TEST spdk_dd_sparse 00:08:19.130 ************************************ 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.130 18:25:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:19.130 18:25:36 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:19.130 18:25:36 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.130 18:25:36 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.130 18:25:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:19.130 ************************************ 00:08:19.130 START TEST spdk_dd_negative 00:08:19.130 ************************************ 00:08:19.130 18:25:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:19.130 * Looking for test storage... 
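Taken together, the three sparse tests above (file_to_file, file_to_bdev, bdev_to_file) push the same 36 MiB sparse file through spdk_dd --sparse and back, with the AIO bdev and lvstore described as inline JSON handed to spdk_dd. A condensed, hand-runnable sketch of the file_to_file leg and its pass criterion, using the names and values from the run above; writing the config to a temporary file stands in for the suite's gen_conf / --json /dev/fd/62 plumbing.

# bdev stack used by the copy: an AIO bdev on the backing file, plus an lvstore on it
cat > dd_sparse.json <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_aio_create",
   "params": {"filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096}},
  {"method": "bdev_lvol_create_lvstore",
   "params": {"lvs_name": "dd_lvstore", "bdev_name": "dd_aio"}},
  {"method": "bdev_wait_for_examine"}
]}]}
JSON

# sparse-aware copy with 12 MiB I/O units, as in dd/sparse.sh@41
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_sparse.json

# pass criterion: apparent size (%s) and allocated 512-byte blocks (%b) unchanged,
# i.e. 37748736 bytes and 24576 blocks in the run above
[[ "$(stat --printf=%s file_zero1)" == "$(stat --printf=%s file_zero2)" ]]
[[ "$(stat --printf=%b file_zero1)" == "$(stat --printf=%b file_zero2)" ]]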
00:08:19.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:19.130 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:19.130 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:19.130 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:19.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.390 --rc genhtml_branch_coverage=1 00:08:19.390 --rc genhtml_function_coverage=1 00:08:19.390 --rc genhtml_legend=1 00:08:19.390 --rc geninfo_all_blocks=1 00:08:19.390 --rc geninfo_unexecuted_blocks=1 00:08:19.390 00:08:19.390 ' 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:19.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.390 --rc genhtml_branch_coverage=1 00:08:19.390 --rc genhtml_function_coverage=1 00:08:19.390 --rc genhtml_legend=1 00:08:19.390 --rc geninfo_all_blocks=1 00:08:19.390 --rc geninfo_unexecuted_blocks=1 00:08:19.390 00:08:19.390 ' 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:19.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.390 --rc genhtml_branch_coverage=1 00:08:19.390 --rc genhtml_function_coverage=1 00:08:19.390 --rc genhtml_legend=1 00:08:19.390 --rc geninfo_all_blocks=1 00:08:19.390 --rc geninfo_unexecuted_blocks=1 00:08:19.390 00:08:19.390 ' 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:19.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.390 --rc genhtml_branch_coverage=1 00:08:19.390 --rc genhtml_function_coverage=1 00:08:19.390 --rc genhtml_legend=1 00:08:19.390 --rc geninfo_all_blocks=1 00:08:19.390 --rc geninfo_unexecuted_blocks=1 00:08:19.390 00:08:19.390 ' 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.390 18:25:37 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.391 ************************************ 00:08:19.391 START TEST 
dd_invalid_arguments 00:08:19.391 ************************************ 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.391 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:19.391 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:19.391 00:08:19.391 CPU options: 00:08:19.391 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:19.391 (like [0,1,10]) 00:08:19.391 --lcores lcore to CPU mapping list. The list is in the format: 00:08:19.391 [<,lcores[@CPUs]>...] 00:08:19.391 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:19.391 Within the group, '-' is used for range separator, 00:08:19.391 ',' is used for single number separator. 00:08:19.391 '( )' can be omitted for single element group, 00:08:19.391 '@' can be omitted if cpus and lcores have the same value 00:08:19.391 --disable-cpumask-locks Disable CPU core lock files. 00:08:19.391 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:19.391 pollers in the app support interrupt mode) 00:08:19.391 -p, --main-core main (primary) core for DPDK 00:08:19.391 00:08:19.391 Configuration options: 00:08:19.391 -c, --config, --json JSON config file 00:08:19.391 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:19.391 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:19.391 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:19.391 --rpcs-allowed comma-separated list of permitted RPCS 00:08:19.391 --json-ignore-init-errors don't exit on invalid config entry 00:08:19.391 00:08:19.391 Memory options: 00:08:19.391 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:19.391 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:19.391 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:19.391 -R, --huge-unlink unlink huge files after initialization 00:08:19.391 -n, --mem-channels number of memory channels used for DPDK 00:08:19.391 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:19.391 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:19.391 --no-huge run without using hugepages 00:08:19.391 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:19.391 -i, --shm-id shared memory ID (optional) 00:08:19.391 -g, --single-file-segments force creating just one hugetlbfs file 00:08:19.391 00:08:19.391 PCI options: 00:08:19.391 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:19.391 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:19.391 -u, --no-pci disable PCI access 00:08:19.391 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:19.391 00:08:19.391 Log options: 00:08:19.391 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:19.391 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:19.391 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:19.391 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:19.391 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:19.391 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:19.391 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:19.391 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:19.391 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:19.391 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:19.391 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:19.391 --silence-noticelog disable notice level logging to stderr 00:08:19.391 00:08:19.391 Trace options: 00:08:19.391 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:19.391 setting 0 to disable trace (default 32768) 00:08:19.391 Tracepoints vary in size and can use more than one trace entry. 00:08:19.391 -e, --tpoint-group [:] 00:08:19.391 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:19.391 [2024-12-08 18:25:37.234174] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:19.391 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:19.391 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:19.391 bdev_raid, all). 00:08:19.391 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:19.391 a tracepoint group. First tpoint inside a group can be enabled by 00:08:19.391 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:19.391 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:19.391 in /include/spdk_internal/trace_defs.h 00:08:19.391 00:08:19.391 Other options: 00:08:19.391 -h, --help show this usage 00:08:19.391 -v, --version print SPDK version 00:08:19.391 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:19.391 --env-context Opaque context for use of the env implementation 00:08:19.391 00:08:19.391 Application specific: 00:08:19.391 [--------- DD Options ---------] 00:08:19.391 --if Input file. Must specify either --if or --ib. 00:08:19.391 --ib Input bdev. Must specifier either --if or --ib 00:08:19.391 --of Output file. Must specify either --of or --ob. 00:08:19.391 --ob Output bdev. Must specify either --of or --ob. 00:08:19.391 --iflag Input file flags. 00:08:19.391 --oflag Output file flags. 00:08:19.391 --bs I/O unit size (default: 4096) 00:08:19.391 --qd Queue depth (default: 2) 00:08:19.391 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:19.391 --skip Skip this many I/O units at start of input. (default: 0) 00:08:19.391 --seek Skip this many I/O units at start of output. (default: 0) 00:08:19.391 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:19.391 --sparse Enable hole skipping in input target 00:08:19.391 Available iflag and oflag values: 00:08:19.391 append - append mode 00:08:19.391 direct - use direct I/O for data 00:08:19.391 directory - fail unless a directory 00:08:19.391 dsync - use synchronized I/O for data 00:08:19.391 noatime - do not update access time 00:08:19.391 noctty - do not assign controlling terminal from file 00:08:19.391 nofollow - do not follow symlinks 00:08:19.392 nonblock - use non-blocking I/O 00:08:19.392 sync - use synchronized I/O for data and metadata 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.392 00:08:19.392 real 0m0.076s 00:08:19.392 user 0m0.041s 00:08:19.392 sys 0m0.034s 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:19.392 ************************************ 00:08:19.392 END TEST dd_invalid_arguments 00:08:19.392 ************************************ 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.392 ************************************ 00:08:19.392 START TEST dd_double_input 00:08:19.392 ************************************ 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.392 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:19.651 [2024-12-08 18:25:37.365939] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
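Both invalid_arguments and double_input above rely on the same inversion: the spdk_dd invocation is expected to fail, so the wrapper succeeds only when the command exits non-zero (the suite's own wrapper is the NOT function whose autotest_common.sh bookkeeping shows up as the es= lines that follow). A minimal sketch of that pattern, with an illustrative helper name rather than the suite's:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# Succeed only if the wrapped command fails; the basis of every negative case here.
expect_failure() {
    if "$@"; then
        echo "expected failure, but command succeeded: $*" >&2
        return 1
    fi
}

# e.g. the double_input case just shown: --if and --ib are mutually exclusive
expect_failure "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=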
00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.651 00:08:19.651 real 0m0.076s 00:08:19.651 user 0m0.054s 00:08:19.651 sys 0m0.021s 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:19.651 ************************************ 00:08:19.651 END TEST dd_double_input 00:08:19.651 ************************************ 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.651 ************************************ 00:08:19.651 START TEST dd_double_output 00:08:19.651 ************************************ 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:19.651 [2024-12-08 18:25:37.496063] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.651 00:08:19.651 real 0m0.075s 00:08:19.651 user 0m0.048s 00:08:19.651 sys 0m0.026s 00:08:19.651 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:19.652 ************************************ 00:08:19.652 END TEST dd_double_output 00:08:19.652 ************************************ 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.652 ************************************ 00:08:19.652 START TEST dd_no_input 00:08:19.652 ************************************ 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.652 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:19.911 [2024-12-08 18:25:37.627705] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.911 00:08:19.911 real 0m0.075s 00:08:19.911 user 0m0.052s 00:08:19.911 sys 0m0.022s 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:19.911 ************************************ 00:08:19.911 END TEST dd_no_input 00:08:19.911 ************************************ 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.911 ************************************ 00:08:19.911 START TEST dd_no_output 00:08:19.911 ************************************ 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.911 [2024-12-08 18:25:37.757321] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:19.911 18:25:37 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.911 00:08:19.911 real 0m0.073s 00:08:19.911 user 0m0.044s 00:08:19.911 sys 0m0.029s 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:19.911 ************************************ 00:08:19.911 END TEST dd_no_output 00:08:19.911 ************************************ 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:19.911 ************************************ 00:08:19.911 START TEST dd_wrong_blocksize 00:08:19.911 ************************************ 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.911 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.912 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.912 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.912 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.912 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.912 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.912 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:20.171 [2024-12-08 18:25:37.884334] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.171 00:08:20.171 real 0m0.071s 00:08:20.171 user 0m0.039s 00:08:20.171 sys 0m0.031s 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:20.171 ************************************ 00:08:20.171 END TEST dd_wrong_blocksize 00:08:20.171 ************************************ 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:20.171 ************************************ 00:08:20.171 START TEST dd_smaller_blocksize 00:08:20.171 ************************************ 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.171 
18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.171 18:25:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:20.171 [2024-12-08 18:25:38.013610] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:20.171 [2024-12-08 18:25:38.014139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73782 ] 00:08:20.430 [2024-12-08 18:25:38.152603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.430 [2024-12-08 18:25:38.243851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.430 [2024-12-08 18:25:38.325769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.690 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:20.690 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:20.690 [2024-12-08 18:25:38.374556] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:20.690 [2024-12-08 18:25:38.374596] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.690 [2024-12-08 18:25:38.552426] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.950 00:08:20.950 real 0m0.725s 00:08:20.950 user 0m0.423s 00:08:20.950 sys 0m0.195s 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:20.950 ************************************ 00:08:20.950 END TEST dd_smaller_blocksize 00:08:20.950 ************************************ 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:20.950 ************************************ 00:08:20.950 START TEST dd_invalid_count 00:08:20.950 ************************************ 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:20.950 [2024-12-08 18:25:38.790251] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.950 00:08:20.950 real 0m0.075s 00:08:20.950 user 0m0.036s 00:08:20.950 sys 0m0.038s 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:20.950 ************************************ 00:08:20.950 END TEST dd_invalid_count 00:08:20.950 ************************************ 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:20.950 ************************************ 
00:08:20.950 START TEST dd_invalid_oflag 00:08:20.950 ************************************ 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.950 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:21.210 [2024-12-08 18:25:38.908391] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.210 00:08:21.210 real 0m0.062s 00:08:21.210 user 0m0.041s 00:08:21.210 sys 0m0.021s 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.210 ************************************ 00:08:21.210 END TEST dd_invalid_oflag 00:08:21.210 ************************************ 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:21.210 ************************************ 00:08:21.210 START TEST dd_invalid_iflag 00:08:21.210 
************************************ 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.210 18:25:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:21.210 [2024-12-08 18:25:39.023327] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.210 00:08:21.210 real 0m0.058s 00:08:21.210 user 0m0.034s 00:08:21.210 sys 0m0.024s 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:21.210 ************************************ 00:08:21.210 END TEST dd_invalid_iflag 00:08:21.210 ************************************ 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:21.210 ************************************ 00:08:21.210 START TEST dd_unknown_flag 00:08:21.210 ************************************ 00:08:21.210 
18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.210 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.211 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:21.470 [2024-12-08 18:25:39.147221] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:21.470 [2024-12-08 18:25:39.147333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73874 ] 00:08:21.470 [2024-12-08 18:25:39.284924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.470 [2024-12-08 18:25:39.367240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.730 [2024-12-08 18:25:39.443611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.730 [2024-12-08 18:25:39.487950] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:21.730 [2024-12-08 18:25:39.488013] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.730 [2024-12-08 18:25:39.488077] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:21.730 [2024-12-08 18:25:39.488091] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.730 [2024-12-08 18:25:39.488373] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:21.730 [2024-12-08 18:25:39.488389] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.730 [2024-12-08 18:25:39.488491] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:21.730 [2024-12-08 18:25:39.488503] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:21.730 [2024-12-08 18:25:39.656851] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.989 00:08:21.989 real 0m0.697s 00:08:21.989 user 0m0.394s 00:08:21.989 sys 0m0.209s 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:21.989 ************************************ 00:08:21.989 END TEST dd_unknown_flag 00:08:21.989 ************************************ 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:21.989 ************************************ 00:08:21.989 START TEST dd_invalid_json 00:08:21.989 ************************************ 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:21.989 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.990 18:25:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:21.990 [2024-12-08 18:25:39.901116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:21.990 [2024-12-08 18:25:39.901216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73903 ] 00:08:22.248 [2024-12-08 18:25:40.039830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.248 [2024-12-08 18:25:40.121059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.248 [2024-12-08 18:25:40.121139] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:22.248 [2024-12-08 18:25:40.121152] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:22.248 [2024-12-08 18:25:40.121161] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.248 [2024-12-08 18:25:40.121200] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.506 00:08:22.506 real 0m0.383s 00:08:22.506 user 0m0.195s 00:08:22.506 sys 0m0.087s 00:08:22.506 ************************************ 00:08:22.506 END TEST dd_invalid_json 00:08:22.506 ************************************ 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:22.506 ************************************ 00:08:22.506 START TEST dd_invalid_seek 00:08:22.506 ************************************ 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:22.506 
18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.506 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:22.506 [2024-12-08 18:25:40.340225] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:22.506 [2024-12-08 18:25:40.340329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73932 ] 00:08:22.506 { 00:08:22.506 "subsystems": [ 00:08:22.506 { 00:08:22.506 "subsystem": "bdev", 00:08:22.506 "config": [ 00:08:22.506 { 00:08:22.506 "params": { 00:08:22.506 "block_size": 512, 00:08:22.506 "num_blocks": 512, 00:08:22.506 "name": "malloc0" 00:08:22.506 }, 00:08:22.506 "method": "bdev_malloc_create" 00:08:22.506 }, 00:08:22.506 { 00:08:22.506 "params": { 00:08:22.506 "block_size": 512, 00:08:22.506 "num_blocks": 512, 00:08:22.506 "name": "malloc1" 00:08:22.506 }, 00:08:22.506 "method": "bdev_malloc_create" 00:08:22.506 }, 00:08:22.506 { 00:08:22.506 "method": "bdev_wait_for_examine" 00:08:22.506 } 00:08:22.506 ] 00:08:22.506 } 00:08:22.506 ] 00:08:22.506 } 00:08:22.765 [2024-12-08 18:25:40.477299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.765 [2024-12-08 18:25:40.556086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.765 [2024-12-08 18:25:40.631933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.024 [2024-12-08 18:25:40.704242] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:23.024 [2024-12-08 18:25:40.704303] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.024 [2024-12-08 18:25:40.871823] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.283 00:08:23.283 real 0m0.715s 00:08:23.283 user 0m0.462s 00:08:23.283 sys 0m0.214s 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.283 18:25:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:23.283 ************************************ 00:08:23.283 END TEST dd_invalid_seek 00:08:23.283 ************************************ 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:23.283 ************************************ 00:08:23.283 START TEST dd_invalid_skip 00:08:23.283 ************************************ 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.283 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:23.283 { 00:08:23.283 "subsystems": [ 00:08:23.283 { 00:08:23.283 "subsystem": "bdev", 00:08:23.283 "config": [ 00:08:23.283 { 00:08:23.283 "params": { 00:08:23.283 "block_size": 512, 00:08:23.283 "num_blocks": 512, 00:08:23.283 "name": "malloc0" 00:08:23.283 }, 00:08:23.283 "method": "bdev_malloc_create" 00:08:23.283 }, 00:08:23.283 { 00:08:23.283 "params": { 00:08:23.283 "block_size": 512, 00:08:23.283 "num_blocks": 512, 00:08:23.283 "name": "malloc1" 
00:08:23.283 }, 00:08:23.283 "method": "bdev_malloc_create" 00:08:23.283 }, 00:08:23.283 { 00:08:23.283 "method": "bdev_wait_for_examine" 00:08:23.283 } 00:08:23.284 ] 00:08:23.284 } 00:08:23.284 ] 00:08:23.284 } 00:08:23.284 [2024-12-08 18:25:41.118748] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:23.284 [2024-12-08 18:25:41.118847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73971 ] 00:08:23.543 [2024-12-08 18:25:41.257227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.543 [2024-12-08 18:25:41.340369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.543 [2024-12-08 18:25:41.418232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.801 [2024-12-08 18:25:41.489982] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:23.801 [2024-12-08 18:25:41.490047] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.801 [2024-12-08 18:25:41.656429] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.061 00:08:24.061 real 0m0.700s 00:08:24.061 user 0m0.430s 00:08:24.061 sys 0m0.230s 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:24.061 ************************************ 00:08:24.061 END TEST dd_invalid_skip 00:08:24.061 ************************************ 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.061 ************************************ 00:08:24.061 START TEST dd_invalid_input_count 00:08:24.061 ************************************ 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:24.061 18:25:41 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.061 18:25:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:24.061 [2024-12-08 18:25:41.887964] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:24.061 [2024-12-08 18:25:41.888099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74005 ] 00:08:24.061 { 00:08:24.061 "subsystems": [ 00:08:24.061 { 00:08:24.061 "subsystem": "bdev", 00:08:24.061 "config": [ 00:08:24.061 { 00:08:24.061 "params": { 00:08:24.061 "block_size": 512, 00:08:24.061 "num_blocks": 512, 00:08:24.061 "name": "malloc0" 00:08:24.061 }, 00:08:24.061 "method": "bdev_malloc_create" 00:08:24.061 }, 00:08:24.061 { 00:08:24.061 "params": { 00:08:24.061 "block_size": 512, 00:08:24.061 "num_blocks": 512, 00:08:24.061 "name": "malloc1" 00:08:24.061 }, 00:08:24.061 "method": "bdev_malloc_create" 00:08:24.061 }, 00:08:24.061 { 00:08:24.061 "method": "bdev_wait_for_examine" 00:08:24.061 } 00:08:24.061 ] 00:08:24.061 } 00:08:24.061 ] 00:08:24.061 } 00:08:24.319 [2024-12-08 18:25:42.023843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.319 [2024-12-08 18:25:42.111396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.319 [2024-12-08 18:25:42.188999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.578 [2024-12-08 18:25:42.258556] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:24.578 [2024-12-08 18:25:42.258618] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.578 [2024-12-08 18:25:42.425240] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.850 00:08:24.850 real 0m0.747s 00:08:24.850 user 0m0.507s 00:08:24.850 sys 0m0.224s 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:24.850 ************************************ 00:08:24.850 END TEST dd_invalid_input_count 00:08:24.850 ************************************ 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.850 ************************************ 00:08:24.850 START TEST dd_invalid_output_count 00:08:24.850 ************************************ 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.850 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.851 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.851 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.851 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.851 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.851 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.851 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.851 18:25:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:24.851 { 00:08:24.851 "subsystems": [ 00:08:24.851 { 00:08:24.851 "subsystem": "bdev", 00:08:24.851 "config": [ 00:08:24.851 { 00:08:24.851 "params": { 00:08:24.851 "block_size": 512, 00:08:24.851 "num_blocks": 512, 00:08:24.851 "name": "malloc0" 00:08:24.851 }, 00:08:24.851 "method": "bdev_malloc_create" 00:08:24.851 }, 00:08:24.851 { 00:08:24.851 "method": "bdev_wait_for_examine" 00:08:24.851 } 00:08:24.851 ] 00:08:24.851 } 00:08:24.851 ] 00:08:24.851 } 00:08:24.851 [2024-12-08 18:25:42.671663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 
initialization... 00:08:24.851 [2024-12-08 18:25:42.672333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74038 ] 00:08:25.142 [2024-12-08 18:25:42.808840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.142 [2024-12-08 18:25:42.887010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.142 [2024-12-08 18:25:42.961021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.142 [2024-12-08 18:25:43.024840] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:25.142 [2024-12-08 18:25:43.024901] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.401 [2024-12-08 18:25:43.190635] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.401 00:08:25.401 real 0m0.671s 00:08:25.401 user 0m0.407s 00:08:25.401 sys 0m0.219s 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:25.401 ************************************ 00:08:25.401 END TEST dd_invalid_output_count 00:08:25.401 ************************************ 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.401 18:25:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:25.661 ************************************ 00:08:25.661 START TEST dd_bs_not_multiple 00:08:25.661 ************************************ 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:25.661 18:25:43 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.661 18:25:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:25.661 [2024-12-08 18:25:43.393547] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:25.661 [2024-12-08 18:25:43.393639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74075 ] 00:08:25.661 { 00:08:25.661 "subsystems": [ 00:08:25.661 { 00:08:25.661 "subsystem": "bdev", 00:08:25.661 "config": [ 00:08:25.661 { 00:08:25.661 "params": { 00:08:25.661 "block_size": 512, 00:08:25.661 "num_blocks": 512, 00:08:25.661 "name": "malloc0" 00:08:25.661 }, 00:08:25.661 "method": "bdev_malloc_create" 00:08:25.661 }, 00:08:25.661 { 00:08:25.661 "params": { 00:08:25.661 "block_size": 512, 00:08:25.661 "num_blocks": 512, 00:08:25.661 "name": "malloc1" 00:08:25.661 }, 00:08:25.661 "method": "bdev_malloc_create" 00:08:25.661 }, 00:08:25.661 { 00:08:25.661 "method": "bdev_wait_for_examine" 00:08:25.661 } 00:08:25.661 ] 00:08:25.661 } 00:08:25.661 ] 00:08:25.661 } 00:08:25.661 [2024-12-08 18:25:43.521816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.920 [2024-12-08 18:25:43.602662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.920 [2024-12-08 18:25:43.676860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.920 [2024-12-08 18:25:43.746486] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:25.920 [2024-12-08 18:25:43.746551] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.179 [2024-12-08 18:25:43.909490] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.179 00:08:26.179 real 0m0.693s 00:08:26.179 user 0m0.436s 00:08:26.179 sys 0m0.214s 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:26.179 ************************************ 00:08:26.179 END TEST dd_bs_not_multiple 00:08:26.179 ************************************ 00:08:26.179 00:08:26.179 real 0m7.120s 00:08:26.179 user 0m4.031s 00:08:26.179 sys 0m2.517s 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.179 18:25:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.179 ************************************ 00:08:26.179 END TEST spdk_dd_negative 00:08:26.179 ************************************ 00:08:26.438 00:08:26.438 real 1m15.809s 00:08:26.438 user 0m47.340s 00:08:26.438 sys 0m34.589s 00:08:26.438 18:25:44 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.438 ************************************ 00:08:26.438 END TEST spdk_dd 00:08:26.438 
************************************ 00:08:26.438 18:25:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.439 18:25:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:26.439 18:25:44 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:26.439 18:25:44 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:26.439 18:25:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.439 18:25:44 -- common/autotest_common.sh@10 -- # set +x 00:08:26.439 18:25:44 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:26.439 18:25:44 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:26.439 18:25:44 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:26.439 18:25:44 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:26.439 18:25:44 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:26.439 18:25:44 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:26.439 18:25:44 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:26.439 18:25:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:26.439 18:25:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.439 18:25:44 -- common/autotest_common.sh@10 -- # set +x 00:08:26.439 ************************************ 00:08:26.439 START TEST nvmf_tcp 00:08:26.439 ************************************ 00:08:26.439 18:25:44 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:26.439 * Looking for test storage... 00:08:26.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:26.439 18:25:44 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:26.439 18:25:44 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:26.439 18:25:44 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:26.699 18:25:44 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.699 18:25:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:26.700 18:25:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:26.700 18:25:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.700 18:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.700 ************************************ 00:08:26.700 START TEST nvmf_target_core 00:08:26.700 ************************************ 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:26.700 * Looking for test storage... 00:08:26.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:26.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.700 --rc genhtml_branch_coverage=1 00:08:26.700 --rc genhtml_function_coverage=1 00:08:26.700 --rc genhtml_legend=1 00:08:26.700 --rc geninfo_all_blocks=1 00:08:26.700 --rc geninfo_unexecuted_blocks=1 00:08:26.700 00:08:26.700 ' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.700 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.701 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.962 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.962 ************************************ 00:08:26.962 START TEST nvmf_host_management 00:08:26.962 ************************************ 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:26.962 * Looking for test storage... 
00:08:26.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.962 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:26.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.963 --rc genhtml_branch_coverage=1 00:08:26.963 --rc genhtml_function_coverage=1 00:08:26.963 --rc genhtml_legend=1 00:08:26.963 --rc geninfo_all_blocks=1 00:08:26.963 --rc geninfo_unexecuted_blocks=1 00:08:26.963 00:08:26.963 ' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:26.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.963 --rc genhtml_branch_coverage=1 00:08:26.963 --rc genhtml_function_coverage=1 00:08:26.963 --rc genhtml_legend=1 00:08:26.963 --rc geninfo_all_blocks=1 00:08:26.963 --rc geninfo_unexecuted_blocks=1 00:08:26.963 00:08:26.963 ' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:26.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.963 --rc genhtml_branch_coverage=1 00:08:26.963 --rc genhtml_function_coverage=1 00:08:26.963 --rc genhtml_legend=1 00:08:26.963 --rc geninfo_all_blocks=1 00:08:26.963 --rc geninfo_unexecuted_blocks=1 00:08:26.963 00:08:26.963 ' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:26.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.963 --rc genhtml_branch_coverage=1 00:08:26.963 --rc genhtml_function_coverage=1 00:08:26.963 --rc genhtml_legend=1 00:08:26.963 --rc geninfo_all_blocks=1 00:08:26.963 --rc geninfo_unexecuted_blocks=1 00:08:26.963 00:08:26.963 ' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
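The lt/cmp_versions checks that keep reappearing in this trace decide whether the installed lcov (1.15 here) is older than 2 and therefore still takes the old --rc lcov_*_coverage flag names. A minimal sketch of that idiom, assuming a simplified field-by-field compare rather than the full scripts/common.sh implementation:

lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i x y
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        x=${v1[i]:-0} y=${v2[i]:-0}
        ((x < y)) && return 0   # strictly older
        ((x > y)) && return 1   # strictly newer
    done
    return 1                    # equal is not "less than"
}

# Mirrors the decision traced above: lcov 1.x still wants the old-style flags.
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi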
00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.963 18:25:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:26.963 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:26.964 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:27.223 Cannot find device "nvmf_init_br" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:27.223 Cannot find device "nvmf_init_br2" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:27.223 Cannot find device "nvmf_tgt_br" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.223 Cannot find device "nvmf_tgt_br2" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:27.223 Cannot find device "nvmf_init_br" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:27.223 Cannot find device "nvmf_init_br2" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:27.223 Cannot find device "nvmf_tgt_br" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:27.223 Cannot find device "nvmf_tgt_br2" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:27.223 Cannot find device "nvmf_br" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:27.223 Cannot find device "nvmf_init_if" 00:08:27.223 18:25:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:27.223 Cannot find device "nvmf_init_if2" 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:27.223 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:27.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:27.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:08:27.482 00:08:27.482 --- 10.0.0.3 ping statistics --- 00:08:27.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.482 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:27.482 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:27.482 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:08:27.482 00:08:27.482 --- 10.0.0.4 ping statistics --- 00:08:27.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.482 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:27.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:27.482 00:08:27.482 --- 10.0.0.1 ping statistics --- 00:08:27.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.482 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:27.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:27.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:27.482 00:08:27.482 --- 10.0.0.2 ping statistics --- 00:08:27.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.482 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=74421 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 74421 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74421 ']' 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
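Before nvmf_tgt is launched, nvmf_veth_init above has already built the test network. A hedged recap of one initiator/target pair (the trace actually creates a second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, plus matching bridge legs), using only commands and names visible in the log:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # bridge ties both legs together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # initiator -> target sanity check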
00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.482 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.740 [2024-12-08 18:25:45.449636] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:27.740 [2024-12-08 18:25:45.449745] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.740 [2024-12-08 18:25:45.589470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.740 [2024-12-08 18:25:45.654915] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.740 [2024-12-08 18:25:45.654995] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.740 [2024-12-08 18:25:45.655021] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.740 [2024-12-08 18:25:45.655029] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.740 [2024-12-08 18:25:45.655036] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.740 [2024-12-08 18:25:45.655369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.740 [2024-12-08 18:25:45.655477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.740 [2024-12-08 18:25:45.655480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.740 [2024-12-08 18:25:45.655227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.998 [2024-12-08 18:25:45.707482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.998 [2024-12-08 18:25:45.821988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:27.998 18:25:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:27.998 Malloc0 00:08:27.998 [2024-12-08 18:25:45.891371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.998 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=74468 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 74468 /var/tmp/bdevperf.sock 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74468 ']' 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
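The rpcs.txt batch piped into rpc_cmd above is never echoed in this trace; only its effects are visible (the Malloc0 bdev and the TCP listener on 10.0.0.3:4420). A hypothetical equivalent sequence via scripts/rpc.py, assembled from the values shown nearby (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, serial SPDKISFASTANDAWESOME, cnode0/host0 NQNs), might look like:

# transport creation is traced explicitly above
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks, exported through a TCP subsystem
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# host0 is whitelisted here so the remove_host/add_host steps later in the test
# have something to toggle
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0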
00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:28.257 { 00:08:28.257 "params": { 00:08:28.257 "name": "Nvme$subsystem", 00:08:28.257 "trtype": "$TEST_TRANSPORT", 00:08:28.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.257 "adrfam": "ipv4", 00:08:28.257 "trsvcid": "$NVMF_PORT", 00:08:28.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.257 "hdgst": ${hdgst:-false}, 00:08:28.257 "ddgst": ${ddgst:-false} 00:08:28.257 }, 00:08:28.257 "method": "bdev_nvme_attach_controller" 00:08:28.257 } 00:08:28.257 EOF 00:08:28.257 )") 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:28.257 18:25:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:28.257 "params": { 00:08:28.257 "name": "Nvme0", 00:08:28.257 "trtype": "tcp", 00:08:28.257 "traddr": "10.0.0.3", 00:08:28.257 "adrfam": "ipv4", 00:08:28.257 "trsvcid": "4420", 00:08:28.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:28.257 "hdgst": false, 00:08:28.257 "ddgst": false 00:08:28.257 }, 00:08:28.257 "method": "bdev_nvme_attach_controller" 00:08:28.257 }' 00:08:28.257 [2024-12-08 18:25:46.003439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:28.257 [2024-12-08 18:25:46.003567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74468 ] 00:08:28.257 [2024-12-08 18:25:46.143951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.515 [2024-12-08 18:25:46.267160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.516 [2024-12-08 18:25:46.349650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.774 Running I/O for 10 seconds... 
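For reference, this is the bdev_nvme_attach_controller fragment printed above, wrapped the way bdevperf consumes it. The subsystems/config wrapper is assumed by analogy with the spdk_dd config earlier in this log, and a file path is used here only for illustration; the test streams the config over /dev/fd/63.

cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same flags as the traced invocation: queue depth 64, 64 KiB I/O, verify workload, 10 s run.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 10

The waitforio loop that follows then polls bdev_get_iostat over /var/tmp/bdevperf.sock until Nvme0n1 reports at least 100 completed reads (899 by the first poll here) before the host add/remove steps begin.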
00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.344 18:25:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.344 18:25:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:29.344 task offset: 0 on job bdev=Nvme0n1 fails 00:08:29.344 00:08:29.344 Latency(us) 00:08:29.344 [2024-12-08T18:25:47.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.344 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:29.344 Job: Nvme0n1 ended in about 0.63 seconds with error 00:08:29.344 Verification LBA range: start 0x0 length 0x400 00:08:29.344 Nvme0n1 : 0.63 1612.67 100.79 100.79 0.00 36473.26 2204.39 34555.35 00:08:29.344 [2024-12-08T18:25:47.274Z] =================================================================================================================== 00:08:29.344 [2024-12-08T18:25:47.274Z] Total : 1612.67 100.79 100.79 0.00 36473.26 2204.39 34555.35 00:08:29.344 [2024-12-08 18:25:47.120428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.344 [2024-12-08 18:25:47.120516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.344 [2024-12-08 18:25:47.120548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.344 [2024-12-08 18:25:47.120558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.344 [2024-12-08 18:25:47.120568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.344 [2024-12-08 18:25:47.120577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.344 [2024-12-08 18:25:47.120588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.344 [2024-12-08 18:25:47.120597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.344 [2024-12-08 18:25:47.120608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.344 [2024-12-08 18:25:47.120616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.344 [2024-12-08 18:25:47.120626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.344 [2024-12-08 18:25:47.120635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.344 [2024-12-08 18:25:47.120646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.344 [2024-12-08 18:25:47.120654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.344 [2024-12-08 18:25:47.120665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.120984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.120994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:29.345 [2024-12-08 18:25:47.121048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 
[2024-12-08 18:25:47.121230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 
18:25:47.121412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.345 [2024-12-08 18:25:47.121476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.345 [2024-12-08 18:25:47.121485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121615] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:29.346 [2024-12-08 18:25:47.121755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.121789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5370 is same with the state(6) to be set 00:08:29.346 [2024-12-08 18:25:47.121904] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fe5370 was disconnected and freed. reset controller. 
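Before the host-removal step whose fallout is dumped above, the script polls bdevperf's iostat until the target has served at least 100 reads (the read_io_count=899 line earlier). That waitforio step amounts to roughly the following loop, built from the rpc_cmd and jq calls visible in the trace.

    # Poll bdevperf's iostat over its RPC socket until Nvme0n1 has completed
    # at least 100 reads (mirrors the waitforio loop traced above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in {10..1}; do
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [[ $reads -ge 100 ]] && break   # log shows 899 on the first poll
        sleep 1
    done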
00:08:29.346 [2024-12-08 18:25:47.122067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.346 [2024-12-08 18:25:47.122083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.122094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.346 [2024-12-08 18:25:47.122101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.122111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.346 [2024-12-08 18:25:47.122118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.122127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.346 [2024-12-08 18:25:47.122135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.346 [2024-12-08 18:25:47.122142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcd860 is same with the state(6) to be set 00:08:29.346 [2024-12-08 18:25:47.123174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:29.346 [2024-12-08 18:25:47.125328] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.346 [2024-12-08 18:25:47.125358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcd860 (9): Bad file descriptor 00:08:29.346 [2024-12-08 18:25:47.133574] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
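The flood of ABORTED - SQ DELETION completions and the controller reset above are the intended effect of pulling the host out of the subsystem while bdevperf still has I/O in flight; the test then re-adds the host so the reset can succeed. The step uses the same RPC pair traced earlier:

    # Remove the host mid-run, then re-add it; the target tears down the I/O
    # qpair (SQ deletion aborts) and the initiator resets the controller.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0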
00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 74468 00:08:30.284 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (74468) - No such process 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:30.284 { 00:08:30.284 "params": { 00:08:30.284 "name": "Nvme$subsystem", 00:08:30.284 "trtype": "$TEST_TRANSPORT", 00:08:30.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.284 "adrfam": "ipv4", 00:08:30.284 "trsvcid": "$NVMF_PORT", 00:08:30.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.284 "hdgst": ${hdgst:-false}, 00:08:30.284 "ddgst": ${ddgst:-false} 00:08:30.284 }, 00:08:30.284 "method": "bdev_nvme_attach_controller" 00:08:30.284 } 00:08:30.284 EOF 00:08:30.284 )") 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:30.284 18:25:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:30.284 "params": { 00:08:30.284 "name": "Nvme0", 00:08:30.284 "trtype": "tcp", 00:08:30.284 "traddr": "10.0.0.3", 00:08:30.284 "adrfam": "ipv4", 00:08:30.284 "trsvcid": "4420", 00:08:30.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:30.284 "hdgst": false, 00:08:30.284 "ddgst": false 00:08:30.284 }, 00:08:30.284 "method": "bdev_nvme_attach_controller" 00:08:30.284 }' 00:08:30.284 [2024-12-08 18:25:48.184030] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
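The kill -9 above finds the first perf process already gone, which the script tolerates, and then drives a second, one-second verify pass against the same target. The --json /dev/fd/62 in the trace is consistent with feeding gen_nvmf_target_json through process substitution; a rough equivalent:

    # The first perf process has already exited; host_management.sh tolerates that.
    kill -9 "$perfpid" || true
    # Short re-verification pass; process substitution is assumed, the trace
    # only shows --json /dev/fd/62 with the same workload options.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1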
00:08:30.284 [2024-12-08 18:25:48.184132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74506 ] 00:08:30.543 [2024-12-08 18:25:48.325799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.543 [2024-12-08 18:25:48.416577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.803 [2024-12-08 18:25:48.498254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.803 Running I/O for 1 seconds... 00:08:31.740 1600.00 IOPS, 100.00 MiB/s 00:08:31.740 Latency(us) 00:08:31.740 [2024-12-08T18:25:49.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.740 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:31.740 Verification LBA range: start 0x0 length 0x400 00:08:31.740 Nvme0n1 : 1.02 1637.31 102.33 0.00 0.00 38381.14 4021.53 35985.22 00:08:31.740 [2024-12-08T18:25:49.670Z] =================================================================================================================== 00:08:31.740 [2024-12-08T18:25:49.670Z] Total : 1637.31 102.33 0.00 0.00 38381.14 4021.53 35985.22 00:08:31.998 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:31.998 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:31.998 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:31.998 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:31.998 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:31.998 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:31.998 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:32.257 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.257 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:32.257 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.257 18:25:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.257 rmmod nvme_tcp 00:08:32.257 rmmod nvme_fabrics 00:08:32.257 rmmod nvme_keyring 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 74421 ']' 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 74421 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 74421 ']' 00:08:32.257 18:25:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 74421 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74421 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:32.257 killing process with pid 74421 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74421' 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 74421 00:08:32.257 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 74421 00:08:32.516 [2024-12-08 18:25:50.276487] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:32.516 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:32.516 18:25:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:32.776 00:08:32.776 real 0m5.909s 00:08:32.776 user 0m21.162s 00:08:32.776 sys 0m1.816s 00:08:32.776 ************************************ 00:08:32.776 END TEST nvmf_host_management 00:08:32.776 ************************************ 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.776 ************************************ 00:08:32.776 START TEST nvmf_lvol 00:08:32.776 ************************************ 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.776 * Looking for test storage... 
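Condensed, the nvmftestfini sequence traced above unloads the host-side NVMe modules, stops the target app, restores iptables, and tears down the veth/bridge topology. PIDs and interface names are taken from the log; the final namespace removal is assumed, since its trace is suppressed behind _remove_spdk_ns.

    # Condensed view of the nvmftestfini teardown traced above.
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 74421 && wait 74421           # stop the nvmf target app (pid from the log)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: hidden behind _remove_spdk_ns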
00:08:32.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.776 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:33.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.037 --rc genhtml_branch_coverage=1 00:08:33.037 --rc genhtml_function_coverage=1 00:08:33.037 --rc genhtml_legend=1 00:08:33.037 --rc geninfo_all_blocks=1 00:08:33.037 --rc geninfo_unexecuted_blocks=1 00:08:33.037 00:08:33.037 ' 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:33.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.037 --rc genhtml_branch_coverage=1 00:08:33.037 --rc genhtml_function_coverage=1 00:08:33.037 --rc genhtml_legend=1 00:08:33.037 --rc geninfo_all_blocks=1 00:08:33.037 --rc geninfo_unexecuted_blocks=1 00:08:33.037 00:08:33.037 ' 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:33.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.037 --rc genhtml_branch_coverage=1 00:08:33.037 --rc genhtml_function_coverage=1 00:08:33.037 --rc genhtml_legend=1 00:08:33.037 --rc geninfo_all_blocks=1 00:08:33.037 --rc geninfo_unexecuted_blocks=1 00:08:33.037 00:08:33.037 ' 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:33.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.037 --rc genhtml_branch_coverage=1 00:08:33.037 --rc genhtml_function_coverage=1 00:08:33.037 --rc genhtml_legend=1 00:08:33.037 --rc geninfo_all_blocks=1 00:08:33.037 --rc geninfo_unexecuted_blocks=1 00:08:33.037 00:08:33.037 ' 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.037 18:25:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.037 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:33.038 
18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
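These NVMF_* variables describe the virtual test network that nvmf_veth_init builds next (the individual commands are traced below): initiator veths at 10.0.0.1/.2 in the root namespace, target veths at 10.0.0.3/.4 inside netns nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge. A condensed sketch for one interface pair:

    # One initiator/target pair of the topology nvmf_veth_init builds below;
    # the *_if2 pair is handled the same way with 10.0.0.2 / 10.0.0.4.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br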
00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:33.038 Cannot find device "nvmf_init_br" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:33.038 Cannot find device "nvmf_init_br2" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:33.038 Cannot find device "nvmf_tgt_br" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.038 Cannot find device "nvmf_tgt_br2" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:33.038 Cannot find device "nvmf_init_br" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:33.038 Cannot find device "nvmf_init_br2" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:33.038 Cannot find device "nvmf_tgt_br" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.038 Cannot find device "nvmf_tgt_br2" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.038 Cannot find device "nvmf_br" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.038 Cannot find device "nvmf_init_if" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.038 Cannot find device "nvmf_init_if2" 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.038 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.298 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.298 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.298 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.298 18:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:33.298 00:08:33.298 --- 10.0.0.3 ping statistics --- 00:08:33.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.298 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.298 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.298 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:08:33.298 00:08:33.298 --- 10.0.0.4 ping statistics --- 00:08:33.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.298 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:33.298 00:08:33.298 --- 10.0.0.1 ping statistics --- 00:08:33.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.298 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:33.298 00:08:33.298 --- 10.0.0.2 ping statistics --- 00:08:33.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.298 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.298 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.557 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=74776 00:08:33.557 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:33.557 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 74776 00:08:33.557 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 74776 ']' 00:08:33.557 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.558 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.558 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.558 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.558 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.558 [2024-12-08 18:25:51.272586] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:33.558 [2024-12-08 18:25:51.272692] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.558 [2024-12-08 18:25:51.405283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:33.558 [2024-12-08 18:25:51.461978] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.558 [2024-12-08 18:25:51.462048] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.558 [2024-12-08 18:25:51.462080] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.558 [2024-12-08 18:25:51.462091] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.558 [2024-12-08 18:25:51.462100] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.558 [2024-12-08 18:25:51.462390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.558 [2024-12-08 18:25:51.462955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.558 [2024-12-08 18:25:51.462962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.817 [2024-12-08 18:25:51.540900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.817 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.817 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:33.817 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:33.817 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.817 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.817 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.817 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:34.076 [2024-12-08 18:25:51.911317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.076 18:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.335 18:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:34.335 18:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.594 18:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:34.594 18:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:34.854 18:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:35.113 18:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6afe1514-8f72-4af3-8b39-c0293c2c7666 00:08:35.113 18:25:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6afe1514-8f72-4af3-8b39-c0293c2c7666 lvol 20 00:08:35.373 18:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cfb9e02f-a074-4b46-8b22-4aeb86e58ccf 00:08:35.373 18:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:35.633 18:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cfb9e02f-a074-4b46-8b22-4aeb86e58ccf 00:08:35.893 18:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:35.893 [2024-12-08 18:25:53.813918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.152 18:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:36.152 18:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=74844 00:08:36.152 18:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:36.152 18:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:37.601 18:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot cfb9e02f-a074-4b46-8b22-4aeb86e58ccf MY_SNAPSHOT 00:08:37.601 18:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ee4fa1c6-277e-43c3-8174-9a18f436328a 00:08:37.601 18:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize cfb9e02f-a074-4b46-8b22-4aeb86e58ccf 30 00:08:37.872 18:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ee4fa1c6-277e-43c3-8174-9a18f436328a MY_CLONE 00:08:38.131 18:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=60c8ce5a-f495-4ae9-911c-a68123bfd67f 00:08:38.131 18:25:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 60c8ce5a-f495-4ae9-911c-a68123bfd67f 00:08:38.698 18:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 74844 00:08:46.842 Initializing NVMe Controllers 00:08:46.842 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:46.842 Controller IO queue size 128, less than required. 00:08:46.842 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:46.842 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:46.842 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:46.842 Initialization complete. Launching workers. 
00:08:46.842 ======================================================== 00:08:46.842 Latency(us) 00:08:46.842 Device Information : IOPS MiB/s Average min max 00:08:46.842 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10427.20 40.73 12283.73 1832.88 70982.68 00:08:46.842 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10443.60 40.80 12269.62 2498.89 50341.13 00:08:46.842 ======================================================== 00:08:46.842 Total : 20870.80 81.53 12276.67 1832.88 70982.68 00:08:46.842 00:08:46.842 18:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.842 18:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cfb9e02f-a074-4b46-8b22-4aeb86e58ccf 00:08:47.101 18:26:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6afe1514-8f72-4af3-8b39-c0293c2c7666 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.365 rmmod nvme_tcp 00:08:47.365 rmmod nvme_fabrics 00:08:47.365 rmmod nvme_keyring 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 74776 ']' 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 74776 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 74776 ']' 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 74776 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74776 00:08:47.365 killing process with pid 74776 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74776' 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 74776 00:08:47.365 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 74776 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:47.624 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:47.883 00:08:47.883 real 0m15.125s 00:08:47.883 user 1m2.796s 00:08:47.883 sys 0m3.946s 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:47.883 ************************************ 00:08:47.883 END TEST nvmf_lvol 00:08:47.883 ************************************ 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.883 ************************************ 00:08:47.883 START TEST nvmf_lvs_grow 00:08:47.883 ************************************ 00:08:47.883 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.142 * Looking for test storage... 00:08:48.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.143 --rc genhtml_branch_coverage=1 00:08:48.143 --rc genhtml_function_coverage=1 00:08:48.143 --rc genhtml_legend=1 00:08:48.143 --rc geninfo_all_blocks=1 00:08:48.143 --rc geninfo_unexecuted_blocks=1 00:08:48.143 00:08:48.143 ' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.143 --rc genhtml_branch_coverage=1 00:08:48.143 --rc genhtml_function_coverage=1 00:08:48.143 --rc genhtml_legend=1 00:08:48.143 --rc geninfo_all_blocks=1 00:08:48.143 --rc geninfo_unexecuted_blocks=1 00:08:48.143 00:08:48.143 ' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.143 --rc genhtml_branch_coverage=1 00:08:48.143 --rc genhtml_function_coverage=1 00:08:48.143 --rc genhtml_legend=1 00:08:48.143 --rc geninfo_all_blocks=1 00:08:48.143 --rc geninfo_unexecuted_blocks=1 00:08:48.143 00:08:48.143 ' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:48.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.143 --rc genhtml_branch_coverage=1 00:08:48.143 --rc genhtml_function_coverage=1 00:08:48.143 --rc genhtml_legend=1 00:08:48.143 --rc geninfo_all_blocks=1 00:08:48.143 --rc geninfo_unexecuted_blocks=1 00:08:48.143 00:08:48.143 ' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:48.143 18:26:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.143 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.143 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
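nvmf_lvs_grow.sh talks JSON-RPC to two daemons: the nvmf target on the default application socket and, later in the run, a bdevperf instance on the bdevperf_rpc_sock defined above. A minimal sketch of that invocation pattern, assuming rpc.py's -s option selects the Unix socket (only the socket path comes from the trace):
  scripts/rpc.py rpc_get_methods                              # nvmf_tgt on the default /var/tmp/spdk.sock
  scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods    # bdevperf on its dedicated socket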
00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:48.144 18:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:48.144 Cannot find device "nvmf_init_br" 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:48.144 Cannot find device "nvmf_init_br2" 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:48.144 Cannot find device "nvmf_tgt_br" 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.144 Cannot find device "nvmf_tgt_br2" 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:48.144 Cannot find device "nvmf_init_br" 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:48.144 Cannot find device "nvmf_init_br2" 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:48.144 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:48.403 Cannot find device "nvmf_tgt_br" 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:48.403 Cannot find device "nvmf_tgt_br2" 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:48.403 Cannot find device "nvmf_br" 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:48.403 Cannot find device "nvmf_init_if" 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:48.403 Cannot find device "nvmf_init_if2" 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.403 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.404 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
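The ipts wrapper invoked next tags every iptables rule it adds with an SPDK_NVMF comment so that the matching iptr helper can strip exactly those rules at teardown (as seen in the nvmf_lvol cleanup earlier in this log). A minimal sketch of the pattern, assuming the interface and port values from the trace, with <rule> standing for the rule's own argument string:
  # open NVMe/TCP port 4420 on the initiator-side veth, tagged for later removal
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:<rule>'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:<rule>'
  # teardown: reload the ruleset with the tagged rules filtered out
  iptables-save | grep -v SPDK_NVMF | iptables-restore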
00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:48.664 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.664 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:08:48.664 00:08:48.664 --- 10.0.0.3 ping statistics --- 00:08:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.664 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:48.664 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:48.664 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:08:48.664 00:08:48.664 --- 10.0.0.4 ping statistics --- 00:08:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.664 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:48.664 00:08:48.664 --- 10.0.0.1 ping statistics --- 00:08:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.664 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:48.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:08:48.664 00:08:48.664 --- 10.0.0.2 ping statistics --- 00:08:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.664 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=75227 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 75227 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 75227 ']' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.664 18:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.664 [2024-12-08 18:26:06.445673] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:48.664 [2024-12-08 18:26:06.445787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.664 [2024-12-08 18:26:06.580917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.923 [2024-12-08 18:26:06.645100] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.923 [2024-12-08 18:26:06.645184] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.923 [2024-12-08 18:26:06.645212] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.923 [2024-12-08 18:26:06.645220] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.924 [2024-12-08 18:26:06.645227] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.924 [2024-12-08 18:26:06.645252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.924 [2024-12-08 18:26:06.694935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.491 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.491 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:49.491 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:49.491 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.491 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.491 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.491 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.750 [2024-12-08 18:26:07.614913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.750 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:49.750 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.750 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.751 ************************************ 00:08:49.751 START TEST lvs_grow_clean 00:08:49.751 ************************************ 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:49.751 18:26:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.751 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:50.319 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:50.319 18:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:50.578 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:08:50.578 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:08:50.578 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.578 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:50.578 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:50.578 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 lvol 150 00:08:50.837 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c5d0e6fa-3cd5-4d49-aec1-249d80faf762 00:08:50.837 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.837 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:51.096 [2024-12-08 18:26:08.970339] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:51.096 [2024-12-08 18:26:08.970474] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:51.096 true 00:08:51.096 18:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:08:51.096 18:26:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:51.354 18:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:51.354 18:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.611 18:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c5d0e6fa-3cd5-4d49-aec1-249d80faf762 00:08:51.869 18:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:52.127 [2024-12-08 18:26:09.962932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.127 18:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75304 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75304 /var/tmp/bdevperf.sock 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 75304 ']' 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.385 18:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:52.385 [2024-12-08 18:26:10.224083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:52.385 [2024-12-08 18:26:10.224185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75304 ] 00:08:52.644 [2024-12-08 18:26:10.359313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.644 [2024-12-08 18:26:10.439147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.644 [2024-12-08 18:26:10.494205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.583 18:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.583 18:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:53.583 18:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:53.583 Nvme0n1 00:08:53.583 18:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:53.842 [ 00:08:53.842 { 00:08:53.842 "name": "Nvme0n1", 00:08:53.842 "aliases": [ 00:08:53.842 "c5d0e6fa-3cd5-4d49-aec1-249d80faf762" 00:08:53.842 ], 00:08:53.842 "product_name": "NVMe disk", 00:08:53.842 "block_size": 4096, 00:08:53.842 "num_blocks": 38912, 00:08:53.842 "uuid": "c5d0e6fa-3cd5-4d49-aec1-249d80faf762", 00:08:53.842 "numa_id": -1, 00:08:53.842 "assigned_rate_limits": { 00:08:53.842 "rw_ios_per_sec": 0, 00:08:53.842 "rw_mbytes_per_sec": 0, 00:08:53.842 "r_mbytes_per_sec": 0, 00:08:53.842 "w_mbytes_per_sec": 0 00:08:53.842 }, 00:08:53.842 "claimed": false, 00:08:53.842 "zoned": false, 00:08:53.842 "supported_io_types": { 00:08:53.842 "read": true, 00:08:53.842 "write": true, 00:08:53.842 "unmap": true, 00:08:53.842 "flush": true, 00:08:53.842 "reset": true, 00:08:53.842 "nvme_admin": true, 00:08:53.842 "nvme_io": true, 00:08:53.842 "nvme_io_md": false, 00:08:53.842 "write_zeroes": true, 00:08:53.842 "zcopy": false, 00:08:53.842 "get_zone_info": false, 00:08:53.842 "zone_management": false, 00:08:53.842 "zone_append": false, 00:08:53.842 "compare": true, 00:08:53.842 "compare_and_write": true, 00:08:53.842 "abort": true, 00:08:53.842 "seek_hole": false, 00:08:53.842 "seek_data": false, 00:08:53.842 "copy": true, 00:08:53.842 "nvme_iov_md": false 00:08:53.842 }, 00:08:53.842 "memory_domains": [ 00:08:53.842 { 00:08:53.842 "dma_device_id": "system", 00:08:53.842 "dma_device_type": 1 00:08:53.842 } 00:08:53.842 ], 00:08:53.842 "driver_specific": { 00:08:53.842 "nvme": [ 00:08:53.842 { 00:08:53.842 "trid": { 00:08:53.842 "trtype": "TCP", 00:08:53.842 "adrfam": "IPv4", 00:08:53.842 "traddr": "10.0.0.3", 00:08:53.842 "trsvcid": "4420", 00:08:53.842 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:53.842 }, 00:08:53.842 "ctrlr_data": { 00:08:53.842 "cntlid": 1, 00:08:53.842 "vendor_id": "0x8086", 00:08:53.842 "model_number": "SPDK bdev Controller", 00:08:53.842 "serial_number": "SPDK0", 00:08:53.842 "firmware_revision": "24.09.1", 00:08:53.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:53.842 "oacs": { 00:08:53.842 "security": 0, 00:08:53.842 "format": 0, 00:08:53.842 "firmware": 0, 
00:08:53.842 "ns_manage": 0 00:08:53.842 }, 00:08:53.842 "multi_ctrlr": true, 00:08:53.842 "ana_reporting": false 00:08:53.842 }, 00:08:53.842 "vs": { 00:08:53.842 "nvme_version": "1.3" 00:08:53.842 }, 00:08:53.842 "ns_data": { 00:08:53.842 "id": 1, 00:08:53.842 "can_share": true 00:08:53.842 } 00:08:53.842 } 00:08:53.842 ], 00:08:53.842 "mp_policy": "active_passive" 00:08:53.842 } 00:08:53.842 } 00:08:53.842 ] 00:08:53.842 18:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75333 00:08:53.842 18:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:53.842 18:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.134 Running I/O for 10 seconds... 00:08:55.106 Latency(us) 00:08:55.106 [2024-12-08T18:26:13.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.106 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:55.106 [2024-12-08T18:26:13.036Z] =================================================================================================================== 00:08:55.106 [2024-12-08T18:26:13.036Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:55.106 00:08:56.041 18:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:08:56.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.041 Nvme0n1 : 2.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:56.041 [2024-12-08T18:26:13.971Z] =================================================================================================================== 00:08:56.041 [2024-12-08T18:26:13.971Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:56.041 00:08:56.041 true 00:08:56.392 18:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:08:56.392 18:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:56.651 18:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:56.651 18:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:56.651 18:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 75333 00:08:56.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.910 Nvme0n1 : 3.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:56.910 [2024-12-08T18:26:14.840Z] =================================================================================================================== 00:08:56.910 [2024-12-08T18:26:14.840Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:56.910 00:08:58.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.286 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:08:58.286 [2024-12-08T18:26:16.216Z] 
=================================================================================================================== 00:08:58.286 [2024-12-08T18:26:16.216Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:08:58.286 00:08:59.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.221 Nvme0n1 : 5.00 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:08:59.221 [2024-12-08T18:26:17.151Z] =================================================================================================================== 00:08:59.221 [2024-12-08T18:26:17.151Z] Total : 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:08:59.221 00:09:00.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.157 Nvme0n1 : 6.00 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:00.157 [2024-12-08T18:26:18.087Z] =================================================================================================================== 00:09:00.157 [2024-12-08T18:26:18.087Z] Total : 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:00.157 00:09:01.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.093 Nvme0n1 : 7.00 6585.86 25.73 0.00 0.00 0.00 0.00 0.00 00:09:01.093 [2024-12-08T18:26:19.023Z] =================================================================================================================== 00:09:01.093 [2024-12-08T18:26:19.023Z] Total : 6585.86 25.73 0.00 0.00 0.00 0.00 0.00 00:09:01.093 00:09:02.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.030 Nvme0n1 : 8.00 6588.12 25.73 0.00 0.00 0.00 0.00 0.00 00:09:02.030 [2024-12-08T18:26:19.960Z] =================================================================================================================== 00:09:02.030 [2024-12-08T18:26:19.960Z] Total : 6588.12 25.73 0.00 0.00 0.00 0.00 0.00 00:09:02.030 00:09:02.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.969 Nvme0n1 : 9.00 6589.89 25.74 0.00 0.00 0.00 0.00 0.00 00:09:02.969 [2024-12-08T18:26:20.899Z] =================================================================================================================== 00:09:02.969 [2024-12-08T18:26:20.899Z] Total : 6589.89 25.74 0.00 0.00 0.00 0.00 0.00 00:09:02.969 00:09:03.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.907 Nvme0n1 : 10.00 6578.60 25.70 0.00 0.00 0.00 0.00 0.00 00:09:03.907 [2024-12-08T18:26:21.837Z] =================================================================================================================== 00:09:03.907 [2024-12-08T18:26:21.837Z] Total : 6578.60 25.70 0.00 0.00 0.00 0.00 0.00 00:09:03.907 00:09:03.907 00:09:03.907 Latency(us) 00:09:03.907 [2024-12-08T18:26:21.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.907 Nvme0n1 : 10.01 6586.55 25.73 0.00 0.00 19428.91 15728.64 42181.35 00:09:03.907 [2024-12-08T18:26:21.837Z] =================================================================================================================== 00:09:03.907 [2024-12-08T18:26:21.837Z] Total : 6586.55 25.73 0.00 0.00 19428.91 15728.64 42181.35 00:09:03.907 { 00:09:03.907 "results": [ 00:09:03.907 { 00:09:03.907 "job": "Nvme0n1", 00:09:03.907 "core_mask": "0x2", 00:09:03.907 "workload": "randwrite", 00:09:03.907 "status": "finished", 00:09:03.907 "queue_depth": 128, 00:09:03.907 "io_size": 4096, 00:09:03.907 "runtime": 
10.007366, 00:09:03.907 "iops": 6586.548348486504, 00:09:03.907 "mibps": 25.728704486275408, 00:09:03.907 "io_failed": 0, 00:09:03.907 "io_timeout": 0, 00:09:03.907 "avg_latency_us": 19428.913841782818, 00:09:03.907 "min_latency_us": 15728.64, 00:09:03.907 "max_latency_us": 42181.35272727273 00:09:03.907 } 00:09:03.907 ], 00:09:03.907 "core_count": 1 00:09:03.907 } 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75304 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 75304 ']' 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 75304 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75304 00:09:04.167 killing process with pid 75304 00:09:04.167 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.167 00:09:04.167 Latency(us) 00:09:04.167 [2024-12-08T18:26:22.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.167 [2024-12-08T18:26:22.097Z] =================================================================================================================== 00:09:04.167 [2024-12-08T18:26:22.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75304' 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 75304 00:09:04.167 18:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 75304 00:09:04.167 18:26:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:04.427 18:26:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.687 18:26:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:09:04.687 18:26:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:04.947 18:26:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:04.947 18:26:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:04.947 18:26:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.206 [2024-12-08 18:26:23.024878] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:05.206 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:09:05.465 request: 00:09:05.465 { 00:09:05.465 "uuid": "554d233e-f1fb-44ed-b1b6-c16d1d4c20d1", 00:09:05.465 "method": "bdev_lvol_get_lvstores", 00:09:05.465 "req_id": 1 00:09:05.465 } 00:09:05.465 Got JSON-RPC error response 00:09:05.465 response: 00:09:05.465 { 00:09:05.465 "code": -19, 00:09:05.465 "message": "No such device" 00:09:05.465 } 00:09:05.465 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:05.465 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:05.465 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:05.465 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:05.465 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:05.724 aio_bdev 00:09:05.724 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
c5d0e6fa-3cd5-4d49-aec1-249d80faf762 00:09:05.724 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c5d0e6fa-3cd5-4d49-aec1-249d80faf762 00:09:05.724 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.724 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:05.724 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.724 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.724 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.997 18:26:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5d0e6fa-3cd5-4d49-aec1-249d80faf762 -t 2000 00:09:06.255 [ 00:09:06.255 { 00:09:06.255 "name": "c5d0e6fa-3cd5-4d49-aec1-249d80faf762", 00:09:06.255 "aliases": [ 00:09:06.255 "lvs/lvol" 00:09:06.255 ], 00:09:06.255 "product_name": "Logical Volume", 00:09:06.255 "block_size": 4096, 00:09:06.255 "num_blocks": 38912, 00:09:06.255 "uuid": "c5d0e6fa-3cd5-4d49-aec1-249d80faf762", 00:09:06.255 "assigned_rate_limits": { 00:09:06.255 "rw_ios_per_sec": 0, 00:09:06.255 "rw_mbytes_per_sec": 0, 00:09:06.255 "r_mbytes_per_sec": 0, 00:09:06.255 "w_mbytes_per_sec": 0 00:09:06.255 }, 00:09:06.255 "claimed": false, 00:09:06.255 "zoned": false, 00:09:06.255 "supported_io_types": { 00:09:06.255 "read": true, 00:09:06.255 "write": true, 00:09:06.255 "unmap": true, 00:09:06.255 "flush": false, 00:09:06.255 "reset": true, 00:09:06.255 "nvme_admin": false, 00:09:06.255 "nvme_io": false, 00:09:06.255 "nvme_io_md": false, 00:09:06.255 "write_zeroes": true, 00:09:06.255 "zcopy": false, 00:09:06.255 "get_zone_info": false, 00:09:06.255 "zone_management": false, 00:09:06.255 "zone_append": false, 00:09:06.255 "compare": false, 00:09:06.255 "compare_and_write": false, 00:09:06.255 "abort": false, 00:09:06.255 "seek_hole": true, 00:09:06.255 "seek_data": true, 00:09:06.255 "copy": false, 00:09:06.255 "nvme_iov_md": false 00:09:06.255 }, 00:09:06.255 "driver_specific": { 00:09:06.255 "lvol": { 00:09:06.255 "lvol_store_uuid": "554d233e-f1fb-44ed-b1b6-c16d1d4c20d1", 00:09:06.255 "base_bdev": "aio_bdev", 00:09:06.255 "thin_provision": false, 00:09:06.255 "num_allocated_clusters": 38, 00:09:06.255 "snapshot": false, 00:09:06.255 "clone": false, 00:09:06.255 "esnap_clone": false 00:09:06.255 } 00:09:06.255 } 00:09:06.255 } 00:09:06.255 ] 00:09:06.255 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:06.255 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:09:06.255 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:06.514 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:06.514 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:09:06.514 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:06.773 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:06.773 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c5d0e6fa-3cd5-4d49-aec1-249d80faf762 00:09:07.032 18:26:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 554d233e-f1fb-44ed-b1b6-c16d1d4c20d1 00:09:07.303 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:07.562 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.820 ************************************ 00:09:07.820 END TEST lvs_grow_clean 00:09:07.820 ************************************ 00:09:07.820 00:09:07.820 real 0m18.033s 00:09:07.820 user 0m17.063s 00:09:07.820 sys 0m2.454s 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.820 ************************************ 00:09:07.820 START TEST lvs_grow_dirty 00:09:07.820 ************************************ 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.820 18:26:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.388 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.388 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.647 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:08.647 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:08.647 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:08.905 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:08.905 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:08.905 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2347b70d-14e5-46ad-98f4-4467d53525d0 lvol 150 00:09:09.163 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ffba3044-3262-4eed-b652-53a23ba1ced3 00:09:09.163 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.163 18:26:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.163 [2024-12-08 18:26:27.083303] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.163 [2024-12-08 18:26:27.083433] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.163 true 00:09:09.420 18:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:09.420 18:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.420 18:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:09.420 18:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:09.679 18:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ffba3044-3262-4eed-b652-53a23ba1ced3 00:09:09.936 18:26:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:10.195 [2024-12-08 18:26:27.979773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.195 18:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:10.453 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75574 00:09:10.453 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:10.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.453 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.453 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75574 /var/tmp/bdevperf.sock 00:09:10.454 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75574 ']' 00:09:10.454 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.454 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.454 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.454 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.454 18:26:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.454 [2024-12-08 18:26:28.317368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:10.454 [2024-12-08 18:26:28.317845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75574 ] 00:09:10.729 [2024-12-08 18:26:28.459460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.729 [2024-12-08 18:26:28.538003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.729 [2024-12-08 18:26:28.592968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.663 18:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.663 18:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:11.663 18:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:11.921 Nvme0n1 00:09:11.921 18:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.179 [ 00:09:12.179 { 00:09:12.179 "name": "Nvme0n1", 00:09:12.179 "aliases": [ 00:09:12.179 "ffba3044-3262-4eed-b652-53a23ba1ced3" 00:09:12.179 ], 00:09:12.179 "product_name": "NVMe disk", 00:09:12.179 "block_size": 4096, 00:09:12.179 "num_blocks": 38912, 00:09:12.179 "uuid": "ffba3044-3262-4eed-b652-53a23ba1ced3", 00:09:12.179 "numa_id": -1, 00:09:12.179 "assigned_rate_limits": { 00:09:12.179 "rw_ios_per_sec": 0, 00:09:12.179 "rw_mbytes_per_sec": 0, 00:09:12.179 "r_mbytes_per_sec": 0, 00:09:12.179 "w_mbytes_per_sec": 0 00:09:12.179 }, 00:09:12.179 "claimed": false, 00:09:12.179 "zoned": false, 00:09:12.179 "supported_io_types": { 00:09:12.179 "read": true, 00:09:12.179 "write": true, 00:09:12.179 "unmap": true, 00:09:12.179 "flush": true, 00:09:12.179 "reset": true, 00:09:12.179 "nvme_admin": true, 00:09:12.179 "nvme_io": true, 00:09:12.179 "nvme_io_md": false, 00:09:12.179 "write_zeroes": true, 00:09:12.179 "zcopy": false, 00:09:12.179 "get_zone_info": false, 00:09:12.179 "zone_management": false, 00:09:12.179 "zone_append": false, 00:09:12.179 "compare": true, 00:09:12.179 "compare_and_write": true, 00:09:12.179 "abort": true, 00:09:12.179 "seek_hole": false, 00:09:12.179 "seek_data": false, 00:09:12.179 "copy": true, 00:09:12.179 "nvme_iov_md": false 00:09:12.179 }, 00:09:12.179 "memory_domains": [ 00:09:12.179 { 00:09:12.179 "dma_device_id": "system", 00:09:12.179 "dma_device_type": 1 00:09:12.179 } 00:09:12.179 ], 00:09:12.179 "driver_specific": { 00:09:12.179 "nvme": [ 00:09:12.179 { 00:09:12.179 "trid": { 00:09:12.179 "trtype": "TCP", 00:09:12.179 "adrfam": "IPv4", 00:09:12.179 "traddr": "10.0.0.3", 00:09:12.179 "trsvcid": "4420", 00:09:12.179 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.179 }, 00:09:12.179 "ctrlr_data": { 00:09:12.179 "cntlid": 1, 00:09:12.179 "vendor_id": "0x8086", 00:09:12.179 "model_number": "SPDK bdev Controller", 00:09:12.179 "serial_number": "SPDK0", 00:09:12.179 "firmware_revision": "24.09.1", 00:09:12.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.179 "oacs": { 00:09:12.179 "security": 0, 00:09:12.179 "format": 0, 00:09:12.179 "firmware": 0, 
00:09:12.179 "ns_manage": 0 00:09:12.179 }, 00:09:12.179 "multi_ctrlr": true, 00:09:12.180 "ana_reporting": false 00:09:12.180 }, 00:09:12.180 "vs": { 00:09:12.180 "nvme_version": "1.3" 00:09:12.180 }, 00:09:12.180 "ns_data": { 00:09:12.180 "id": 1, 00:09:12.180 "can_share": true 00:09:12.180 } 00:09:12.180 } 00:09:12.180 ], 00:09:12.180 "mp_policy": "active_passive" 00:09:12.180 } 00:09:12.180 } 00:09:12.180 ] 00:09:12.180 18:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75603 00:09:12.180 18:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.180 18:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.180 Running I/O for 10 seconds... 00:09:13.115 Latency(us) 00:09:13.115 [2024-12-08T18:26:31.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.115 Nvme0n1 : 1.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:13.115 [2024-12-08T18:26:31.045Z] =================================================================================================================== 00:09:13.115 [2024-12-08T18:26:31.045Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:09:13.115 00:09:14.050 18:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:14.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.309 Nvme0n1 : 2.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:14.309 [2024-12-08T18:26:32.239Z] =================================================================================================================== 00:09:14.309 [2024-12-08T18:26:32.239Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:09:14.309 00:09:14.309 true 00:09:14.309 18:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:14.309 18:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:14.878 18:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.878 18:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.878 18:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 75603 00:09:15.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.137 Nvme0n1 : 3.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:09:15.137 [2024-12-08T18:26:33.067Z] =================================================================================================================== 00:09:15.137 [2024-12-08T18:26:33.067Z] Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:09:15.137 00:09:16.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.074 Nvme0n1 : 4.00 7746.25 30.26 0.00 0.00 0.00 0.00 0.00 00:09:16.074 [2024-12-08T18:26:34.004Z] 
=================================================================================================================== 00:09:16.074 [2024-12-08T18:26:34.004Z] Total : 7746.25 30.26 0.00 0.00 0.00 0.00 0.00 00:09:16.074 00:09:17.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.501 Nvme0n1 : 5.00 7771.80 30.36 0.00 0.00 0.00 0.00 0.00 00:09:17.501 [2024-12-08T18:26:35.431Z] =================================================================================================================== 00:09:17.501 [2024-12-08T18:26:35.431Z] Total : 7771.80 30.36 0.00 0.00 0.00 0.00 0.00 00:09:17.501 00:09:18.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.068 Nvme0n1 : 6.00 7831.17 30.59 0.00 0.00 0.00 0.00 0.00 00:09:18.068 [2024-12-08T18:26:35.998Z] =================================================================================================================== 00:09:18.068 [2024-12-08T18:26:35.998Z] Total : 7831.17 30.59 0.00 0.00 0.00 0.00 0.00 00:09:18.068 00:09:19.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.446 Nvme0n1 : 7.00 7759.71 30.31 0.00 0.00 0.00 0.00 0.00 00:09:19.446 [2024-12-08T18:26:37.376Z] =================================================================================================================== 00:09:19.446 [2024-12-08T18:26:37.376Z] Total : 7759.71 30.31 0.00 0.00 0.00 0.00 0.00 00:09:19.446 00:09:20.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.382 Nvme0n1 : 8.00 7599.38 29.69 0.00 0.00 0.00 0.00 0.00 00:09:20.383 [2024-12-08T18:26:38.313Z] =================================================================================================================== 00:09:20.383 [2024-12-08T18:26:38.313Z] Total : 7599.38 29.69 0.00 0.00 0.00 0.00 0.00 00:09:20.383 00:09:21.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.320 Nvme0n1 : 9.00 7474.67 29.20 0.00 0.00 0.00 0.00 0.00 00:09:21.320 [2024-12-08T18:26:39.250Z] =================================================================================================================== 00:09:21.320 [2024-12-08T18:26:39.250Z] Total : 7474.67 29.20 0.00 0.00 0.00 0.00 0.00 00:09:21.320 00:09:22.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.258 Nvme0n1 : 10.00 7387.60 28.86 0.00 0.00 0.00 0.00 0.00 00:09:22.258 [2024-12-08T18:26:40.188Z] =================================================================================================================== 00:09:22.258 [2024-12-08T18:26:40.188Z] Total : 7387.60 28.86 0.00 0.00 0.00 0.00 0.00 00:09:22.258 00:09:22.258 00:09:22.258 Latency(us) 00:09:22.258 [2024-12-08T18:26:40.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.258 Nvme0n1 : 10.01 7394.65 28.89 0.00 0.00 17305.68 5034.36 101997.85 00:09:22.258 [2024-12-08T18:26:40.188Z] =================================================================================================================== 00:09:22.258 [2024-12-08T18:26:40.188Z] Total : 7394.65 28.89 0.00 0.00 17305.68 5034.36 101997.85 00:09:22.258 { 00:09:22.258 "results": [ 00:09:22.258 { 00:09:22.258 "job": "Nvme0n1", 00:09:22.258 "core_mask": "0x2", 00:09:22.258 "workload": "randwrite", 00:09:22.258 "status": "finished", 00:09:22.258 "queue_depth": 128, 00:09:22.258 "io_size": 4096, 00:09:22.258 "runtime": 
10.007779, 00:09:22.258 "iops": 7394.647703551407, 00:09:22.258 "mibps": 28.885342591997684, 00:09:22.258 "io_failed": 0, 00:09:22.258 "io_timeout": 0, 00:09:22.258 "avg_latency_us": 17305.68483394018, 00:09:22.258 "min_latency_us": 5034.356363636363, 00:09:22.259 "max_latency_us": 101997.84727272727 00:09:22.259 } 00:09:22.259 ], 00:09:22.259 "core_count": 1 00:09:22.259 } 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75574 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 75574 ']' 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 75574 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75574 00:09:22.259 killing process with pid 75574 00:09:22.259 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.259 00:09:22.259 Latency(us) 00:09:22.259 [2024-12-08T18:26:40.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.259 [2024-12-08T18:26:40.189Z] =================================================================================================================== 00:09:22.259 [2024-12-08T18:26:40.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75574' 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 75574 00:09:22.259 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 75574 00:09:22.517 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:22.775 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.032 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.032 18:26:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 75227 
00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 75227 00:09:23.291 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 75227 Killed "${NVMF_APP[@]}" "$@" 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=75736 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 75736 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75736 ']' 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.291 18:26:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.291 [2024-12-08 18:26:41.125818] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:23.291 [2024-12-08 18:26:41.126088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.549 [2024-12-08 18:26:41.260703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.549 [2024-12-08 18:26:41.323735] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.549 [2024-12-08 18:26:41.324048] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.549 [2024-12-08 18:26:41.324082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.549 [2024-12-08 18:26:41.324091] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.549 [2024-12-08 18:26:41.324098] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:23.549 [2024-12-08 18:26:41.324132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.549 [2024-12-08 18:26:41.377673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.116 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.116 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:24.116 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:24.116 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.116 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.376 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.376 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.635 [2024-12-08 18:26:42.338037] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:24.635 [2024-12-08 18:26:42.338570] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:24.635 [2024-12-08 18:26:42.338906] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ffba3044-3262-4eed-b652-53a23ba1ced3 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ffba3044-3262-4eed-b652-53a23ba1ced3 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.635 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.894 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ffba3044-3262-4eed-b652-53a23ba1ced3 -t 2000 00:09:25.154 [ 00:09:25.154 { 00:09:25.154 "name": "ffba3044-3262-4eed-b652-53a23ba1ced3", 00:09:25.154 "aliases": [ 00:09:25.154 "lvs/lvol" 00:09:25.154 ], 00:09:25.154 "product_name": "Logical Volume", 00:09:25.154 "block_size": 4096, 00:09:25.154 "num_blocks": 38912, 00:09:25.154 "uuid": "ffba3044-3262-4eed-b652-53a23ba1ced3", 00:09:25.154 "assigned_rate_limits": { 00:09:25.154 "rw_ios_per_sec": 0, 00:09:25.154 "rw_mbytes_per_sec": 0, 00:09:25.154 "r_mbytes_per_sec": 0, 00:09:25.154 "w_mbytes_per_sec": 0 00:09:25.154 }, 00:09:25.154 
"claimed": false, 00:09:25.154 "zoned": false, 00:09:25.154 "supported_io_types": { 00:09:25.154 "read": true, 00:09:25.154 "write": true, 00:09:25.154 "unmap": true, 00:09:25.154 "flush": false, 00:09:25.154 "reset": true, 00:09:25.154 "nvme_admin": false, 00:09:25.154 "nvme_io": false, 00:09:25.154 "nvme_io_md": false, 00:09:25.154 "write_zeroes": true, 00:09:25.154 "zcopy": false, 00:09:25.154 "get_zone_info": false, 00:09:25.154 "zone_management": false, 00:09:25.154 "zone_append": false, 00:09:25.154 "compare": false, 00:09:25.154 "compare_and_write": false, 00:09:25.154 "abort": false, 00:09:25.154 "seek_hole": true, 00:09:25.154 "seek_data": true, 00:09:25.154 "copy": false, 00:09:25.154 "nvme_iov_md": false 00:09:25.154 }, 00:09:25.154 "driver_specific": { 00:09:25.154 "lvol": { 00:09:25.154 "lvol_store_uuid": "2347b70d-14e5-46ad-98f4-4467d53525d0", 00:09:25.154 "base_bdev": "aio_bdev", 00:09:25.154 "thin_provision": false, 00:09:25.154 "num_allocated_clusters": 38, 00:09:25.154 "snapshot": false, 00:09:25.154 "clone": false, 00:09:25.154 "esnap_clone": false 00:09:25.154 } 00:09:25.154 } 00:09:25.154 } 00:09:25.154 ] 00:09:25.154 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:25.154 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:25.154 18:26:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:25.413 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:25.413 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:25.413 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:25.676 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:25.676 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.935 [2024-12-08 18:26:43.672289] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.935 18:26:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:25.935 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:26.193 request: 00:09:26.193 { 00:09:26.193 "uuid": "2347b70d-14e5-46ad-98f4-4467d53525d0", 00:09:26.193 "method": "bdev_lvol_get_lvstores", 00:09:26.193 "req_id": 1 00:09:26.193 } 00:09:26.193 Got JSON-RPC error response 00:09:26.193 response: 00:09:26.193 { 00:09:26.193 "code": -19, 00:09:26.193 "message": "No such device" 00:09:26.193 } 00:09:26.193 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:26.193 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:26.193 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:26.193 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:26.193 18:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.452 aio_bdev 00:09:26.452 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ffba3044-3262-4eed-b652-53a23ba1ced3 00:09:26.452 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ffba3044-3262-4eed-b652-53a23ba1ced3 00:09:26.452 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.452 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:26.452 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.452 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.452 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:26.710 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ffba3044-3262-4eed-b652-53a23ba1ced3 -t 2000 00:09:26.968 [ 00:09:26.968 { 
00:09:26.968 "name": "ffba3044-3262-4eed-b652-53a23ba1ced3", 00:09:26.968 "aliases": [ 00:09:26.968 "lvs/lvol" 00:09:26.968 ], 00:09:26.968 "product_name": "Logical Volume", 00:09:26.968 "block_size": 4096, 00:09:26.968 "num_blocks": 38912, 00:09:26.968 "uuid": "ffba3044-3262-4eed-b652-53a23ba1ced3", 00:09:26.968 "assigned_rate_limits": { 00:09:26.968 "rw_ios_per_sec": 0, 00:09:26.968 "rw_mbytes_per_sec": 0, 00:09:26.968 "r_mbytes_per_sec": 0, 00:09:26.968 "w_mbytes_per_sec": 0 00:09:26.968 }, 00:09:26.968 "claimed": false, 00:09:26.968 "zoned": false, 00:09:26.968 "supported_io_types": { 00:09:26.968 "read": true, 00:09:26.968 "write": true, 00:09:26.968 "unmap": true, 00:09:26.968 "flush": false, 00:09:26.968 "reset": true, 00:09:26.968 "nvme_admin": false, 00:09:26.968 "nvme_io": false, 00:09:26.968 "nvme_io_md": false, 00:09:26.968 "write_zeroes": true, 00:09:26.968 "zcopy": false, 00:09:26.968 "get_zone_info": false, 00:09:26.968 "zone_management": false, 00:09:26.968 "zone_append": false, 00:09:26.968 "compare": false, 00:09:26.968 "compare_and_write": false, 00:09:26.968 "abort": false, 00:09:26.968 "seek_hole": true, 00:09:26.968 "seek_data": true, 00:09:26.968 "copy": false, 00:09:26.968 "nvme_iov_md": false 00:09:26.968 }, 00:09:26.968 "driver_specific": { 00:09:26.968 "lvol": { 00:09:26.968 "lvol_store_uuid": "2347b70d-14e5-46ad-98f4-4467d53525d0", 00:09:26.968 "base_bdev": "aio_bdev", 00:09:26.968 "thin_provision": false, 00:09:26.968 "num_allocated_clusters": 38, 00:09:26.968 "snapshot": false, 00:09:26.968 "clone": false, 00:09:26.968 "esnap_clone": false 00:09:26.968 } 00:09:26.968 } 00:09:26.968 } 00:09:26.968 ] 00:09:26.968 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:26.968 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:26.968 18:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:27.226 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:27.226 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:27.226 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:27.484 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:27.484 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ffba3044-3262-4eed-b652-53a23ba1ced3 00:09:27.742 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2347b70d-14e5-46ad-98f4-4467d53525d0 00:09:28.000 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.258 18:26:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:28.518 ************************************ 00:09:28.518 END TEST lvs_grow_dirty 00:09:28.518 ************************************ 00:09:28.518 00:09:28.518 real 0m20.659s 00:09:28.518 user 0m41.354s 00:09:28.518 sys 0m9.475s 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:28.518 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:28.518 nvmf_trace.0 00:09:28.776 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:28.776 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:28.776 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:28.776 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:29.033 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.033 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:29.033 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.033 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.033 rmmod nvme_tcp 00:09:29.033 rmmod nvme_fabrics 00:09:29.033 rmmod nvme_keyring 00:09:29.033 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.290 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:29.290 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:29.290 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 75736 ']' 00:09:29.290 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 75736 00:09:29.290 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 75736 ']' 00:09:29.290 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 75736 00:09:29.291 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:29.291 18:26:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.291 18:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75736 00:09:29.291 killing process with pid 75736 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75736' 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 75736 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 75736 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:29.291 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:29.549 00:09:29.549 real 0m41.657s 00:09:29.549 user 1m5.149s 00:09:29.549 sys 0m12.975s 00:09:29.549 ************************************ 00:09:29.549 END TEST nvmf_lvs_grow 00:09:29.549 ************************************ 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.549 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.809 ************************************ 00:09:29.809 START TEST nvmf_bdev_io_wait 00:09:29.809 ************************************ 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:29.809 * Looking for test storage... 
00:09:29.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.809 --rc genhtml_branch_coverage=1 00:09:29.809 --rc genhtml_function_coverage=1 00:09:29.809 --rc genhtml_legend=1 00:09:29.809 --rc geninfo_all_blocks=1 00:09:29.809 --rc geninfo_unexecuted_blocks=1 00:09:29.809 00:09:29.809 ' 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.809 --rc genhtml_branch_coverage=1 00:09:29.809 --rc genhtml_function_coverage=1 00:09:29.809 --rc genhtml_legend=1 00:09:29.809 --rc geninfo_all_blocks=1 00:09:29.809 --rc geninfo_unexecuted_blocks=1 00:09:29.809 00:09:29.809 ' 00:09:29.809 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:29.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.809 --rc genhtml_branch_coverage=1 00:09:29.809 --rc genhtml_function_coverage=1 00:09:29.809 --rc genhtml_legend=1 00:09:29.809 --rc geninfo_all_blocks=1 00:09:29.809 --rc geninfo_unexecuted_blocks=1 00:09:29.809 00:09:29.809 ' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.810 --rc genhtml_branch_coverage=1 00:09:29.810 --rc genhtml_function_coverage=1 00:09:29.810 --rc genhtml_legend=1 00:09:29.810 --rc geninfo_all_blocks=1 00:09:29.810 --rc geninfo_unexecuted_blocks=1 00:09:29.810 00:09:29.810 ' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.810 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.810 
18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:29.810 Cannot find device "nvmf_init_br" 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:29.810 Cannot find device "nvmf_init_br2" 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:29.810 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.070 Cannot find device "nvmf_tgt_br" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.070 Cannot find device "nvmf_tgt_br2" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:30.070 Cannot find device "nvmf_init_br" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:30.070 Cannot find device "nvmf_init_br2" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:30.070 Cannot find device "nvmf_tgt_br" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:30.070 Cannot find device "nvmf_tgt_br2" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:30.070 Cannot find device "nvmf_br" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:30.070 Cannot find device "nvmf_init_if" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:30.070 Cannot find device "nvmf_init_if2" 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:30.070 
18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:30.070 18:26:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:30.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:30.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:30.330 00:09:30.330 --- 10.0.0.3 ping statistics --- 00:09:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.330 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:30.330 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:30.330 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:09:30.330 00:09:30.330 --- 10.0.0.4 ping statistics --- 00:09:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.330 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:30.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:30.330 00:09:30.330 --- 10.0.0.1 ping statistics --- 00:09:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.330 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:30.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:30.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:09:30.330 00:09:30.330 --- 10.0.0.2 ping statistics --- 00:09:30.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.330 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=76106 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 76106 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 76106 ']' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.330 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.330 [2024-12-08 18:26:48.173234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:30.330 [2024-12-08 18:26:48.173815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.590 [2024-12-08 18:26:48.310792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.590 [2024-12-08 18:26:48.378313] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.590 [2024-12-08 18:26:48.378678] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.590 [2024-12-08 18:26:48.378808] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.590 [2024-12-08 18:26:48.378990] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.590 [2024-12-08 18:26:48.379021] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.590 [2024-12-08 18:26:48.379179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.590 [2024-12-08 18:26:48.379501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.590 [2024-12-08 18:26:48.379573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.590 [2024-12-08 18:26:48.379577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.590 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.850 [2024-12-08 18:26:48.542310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.850 [2024-12-08 18:26:48.558796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.850 Malloc0 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.850 [2024-12-08 18:26:48.629316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76134 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76136 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:30.850 18:26:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:30.850 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76138 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:30.851 { 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme$subsystem", 00:09:30.851 "trtype": "$TEST_TRANSPORT", 00:09:30.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.851 "adrfam": "ipv4", 00:09:30.851 "trsvcid": "$NVMF_PORT", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.851 "hdgst": ${hdgst:-false}, 00:09:30.851 "ddgst": ${ddgst:-false} 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 } 00:09:30.851 EOF 00:09:30.851 )") 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76140 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:30.851 { 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme$subsystem", 00:09:30.851 "trtype": "$TEST_TRANSPORT", 00:09:30.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.851 "adrfam": "ipv4", 00:09:30.851 "trsvcid": "$NVMF_PORT", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.851 "hdgst": ${hdgst:-false}, 00:09:30.851 "ddgst": ${ddgst:-false} 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 } 00:09:30.851 EOF 00:09:30.851 )") 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:30.851 { 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme$subsystem", 00:09:30.851 "trtype": "$TEST_TRANSPORT", 00:09:30.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.851 "adrfam": "ipv4", 
00:09:30.851 "trsvcid": "$NVMF_PORT", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.851 "hdgst": ${hdgst:-false}, 00:09:30.851 "ddgst": ${ddgst:-false} 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 } 00:09:30.851 EOF 00:09:30.851 )") 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:30.851 { 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme$subsystem", 00:09:30.851 "trtype": "$TEST_TRANSPORT", 00:09:30.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.851 "adrfam": "ipv4", 00:09:30.851 "trsvcid": "$NVMF_PORT", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.851 "hdgst": ${hdgst:-false}, 00:09:30.851 "ddgst": ${ddgst:-false} 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 } 00:09:30.851 EOF 00:09:30.851 )") 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme1", 00:09:30.851 "trtype": "tcp", 00:09:30.851 "traddr": "10.0.0.3", 00:09:30.851 "adrfam": "ipv4", 00:09:30.851 "trsvcid": "4420", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.851 "hdgst": false, 00:09:30.851 "ddgst": false 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 }' 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme1", 00:09:30.851 "trtype": "tcp", 00:09:30.851 "traddr": "10.0.0.3", 00:09:30.851 "adrfam": "ipv4", 00:09:30.851 "trsvcid": "4420", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.851 "hdgst": false, 00:09:30.851 "ddgst": false 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 }' 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme1", 00:09:30.851 "trtype": "tcp", 00:09:30.851 "traddr": "10.0.0.3", 00:09:30.851 "adrfam": "ipv4", 00:09:30.851 "trsvcid": "4420", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.851 "hdgst": false, 00:09:30.851 "ddgst": false 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 }' 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:30.851 "params": { 00:09:30.851 "name": "Nvme1", 00:09:30.851 "trtype": "tcp", 00:09:30.851 "traddr": "10.0.0.3", 00:09:30.851 "adrfam": "ipv4", 00:09:30.851 "trsvcid": "4420", 00:09:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.851 "hdgst": false, 00:09:30.851 "ddgst": false 00:09:30.851 }, 00:09:30.851 "method": "bdev_nvme_attach_controller" 00:09:30.851 }' 00:09:30.851 [2024-12-08 18:26:48.694276] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:30.851 [2024-12-08 18:26:48.694552] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:30.851 [2024-12-08 18:26:48.698229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:30.851 [2024-12-08 18:26:48.698309] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:30.851 18:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76134 00:09:30.851 [2024-12-08 18:26:48.727293] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:30.851 [2024-12-08 18:26:48.727374] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:30.851 [2024-12-08 18:26:48.753159] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:30.851 [2024-12-08 18:26:48.753301] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:31.111 [2024-12-08 18:26:48.907610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.111 [2024-12-08 18:26:48.974261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.111 [2024-12-08 18:26:48.987825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:31.369 [2024-12-08 18:26:49.040677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:31.369 [2024-12-08 18:26:49.046765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.369 [2024-12-08 18:26:49.050757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.369 [2024-12-08 18:26:49.088489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.369 [2024-12-08 18:26:49.111927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:31.369 [2024-12-08 18:26:49.125044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.369 [2024-12-08 18:26:49.195739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.369 [2024-12-08 18:26:49.199478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:31.369 Running I/O for 1 seconds... 00:09:31.369 Running I/O for 1 seconds... 00:09:31.369 [2024-12-08 18:26:49.265971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.628 Running I/O for 1 seconds... 00:09:31.628 Running I/O for 1 seconds... 
00:09:32.563 171816.00 IOPS, 671.16 MiB/s 00:09:32.563 Latency(us) 00:09:32.563 [2024-12-08T18:26:50.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.563 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:32.563 Nvme1n1 : 1.00 171498.67 669.92 0.00 0.00 742.66 424.49 2010.76 00:09:32.563 [2024-12-08T18:26:50.493Z] =================================================================================================================== 00:09:32.563 [2024-12-08T18:26:50.493Z] Total : 171498.67 669.92 0.00 0.00 742.66 424.49 2010.76 00:09:32.563 10538.00 IOPS, 41.16 MiB/s 00:09:32.563 Latency(us) 00:09:32.563 [2024-12-08T18:26:50.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.564 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:32.564 Nvme1n1 : 1.01 10592.34 41.38 0.00 0.00 12034.28 5719.51 19303.33 00:09:32.564 [2024-12-08T18:26:50.494Z] =================================================================================================================== 00:09:32.564 [2024-12-08T18:26:50.494Z] Total : 10592.34 41.38 0.00 0.00 12034.28 5719.51 19303.33 00:09:32.564 7119.00 IOPS, 27.81 MiB/s 00:09:32.564 Latency(us) 00:09:32.564 [2024-12-08T18:26:50.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.564 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:32.564 Nvme1n1 : 1.01 7161.34 27.97 0.00 0.00 17759.40 9353.77 26691.03 00:09:32.564 [2024-12-08T18:26:50.494Z] =================================================================================================================== 00:09:32.564 [2024-12-08T18:26:50.494Z] Total : 7161.34 27.97 0.00 0.00 17759.40 9353.77 26691.03 00:09:32.564 8912.00 IOPS, 34.81 MiB/s 00:09:32.564 Latency(us) 00:09:32.564 [2024-12-08T18:26:50.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.564 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:32.564 Nvme1n1 : 1.01 8992.53 35.13 0.00 0.00 14175.97 6255.71 22639.71 00:09:32.564 [2024-12-08T18:26:50.494Z] =================================================================================================================== 00:09:32.564 [2024-12-08T18:26:50.494Z] Total : 8992.53 35.13 0.00 0.00 14175.97 6255.71 22639.71 00:09:32.564 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76136 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76138 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76140 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.823 rmmod nvme_tcp 00:09:32.823 rmmod nvme_fabrics 00:09:32.823 rmmod nvme_keyring 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 76106 ']' 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 76106 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 76106 ']' 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 76106 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.823 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76106 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.083 killing process with pid 76106 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76106' 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 76106 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 76106 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:33.083 18:26:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:33.083 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:33.343 00:09:33.343 real 0m3.755s 00:09:33.343 user 0m14.785s 00:09:33.343 sys 0m2.540s 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.343 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.343 ************************************ 00:09:33.343 END TEST nvmf_bdev_io_wait 00:09:33.343 ************************************ 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.604 ************************************ 00:09:33.604 START TEST nvmf_queue_depth 00:09:33.604 ************************************ 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:33.604 * Looking for test storage... 
00:09:33.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.604 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:33.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.605 --rc genhtml_branch_coverage=1 00:09:33.605 --rc genhtml_function_coverage=1 00:09:33.605 --rc genhtml_legend=1 00:09:33.605 --rc geninfo_all_blocks=1 00:09:33.605 --rc geninfo_unexecuted_blocks=1 00:09:33.605 00:09:33.605 ' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:33.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.605 --rc genhtml_branch_coverage=1 00:09:33.605 --rc genhtml_function_coverage=1 00:09:33.605 --rc genhtml_legend=1 00:09:33.605 --rc geninfo_all_blocks=1 00:09:33.605 --rc geninfo_unexecuted_blocks=1 00:09:33.605 00:09:33.605 ' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:33.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.605 --rc genhtml_branch_coverage=1 00:09:33.605 --rc genhtml_function_coverage=1 00:09:33.605 --rc genhtml_legend=1 00:09:33.605 --rc geninfo_all_blocks=1 00:09:33.605 --rc geninfo_unexecuted_blocks=1 00:09:33.605 00:09:33.605 ' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:33.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.605 --rc genhtml_branch_coverage=1 00:09:33.605 --rc genhtml_function_coverage=1 00:09:33.605 --rc genhtml_legend=1 00:09:33.605 --rc geninfo_all_blocks=1 00:09:33.605 --rc geninfo_unexecuted_blocks=1 00:09:33.605 00:09:33.605 ' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.605 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:33.605 
18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:33.605 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.606 18:26:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:33.606 Cannot find device "nvmf_init_br" 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:33.606 Cannot find device "nvmf_init_br2" 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:33.606 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:33.866 Cannot find device "nvmf_tgt_br" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.866 Cannot find device "nvmf_tgt_br2" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:33.866 Cannot find device "nvmf_init_br" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:33.866 Cannot find device "nvmf_init_br2" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:33.866 Cannot find device "nvmf_tgt_br" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:33.866 Cannot find device "nvmf_tgt_br2" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:33.866 Cannot find device "nvmf_br" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:33.866 Cannot find device "nvmf_init_if" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:33.866 Cannot find device "nvmf_init_if2" 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.866 18:26:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.866 
18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:33.866 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:34.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:09:34.127 00:09:34.127 --- 10.0.0.3 ping statistics --- 00:09:34.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.127 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:34.127 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:34.127 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:09:34.127 00:09:34.127 --- 10.0.0.4 ping statistics --- 00:09:34.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.127 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:34.127 00:09:34.127 --- 10.0.0.1 ping statistics --- 00:09:34.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.127 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:34.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:34.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:09:34.127 00:09:34.127 --- 10.0.0.2 ping statistics --- 00:09:34.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.127 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=76402 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 76402 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76402 ']' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.127 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.128 18:26:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 [2024-12-08 18:26:51.937621] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:34.128 [2024-12-08 18:26:51.937881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.387 [2024-12-08 18:26:52.084386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.387 [2024-12-08 18:26:52.149873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.387 [2024-12-08 18:26:52.150190] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.387 [2024-12-08 18:26:52.150385] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.388 [2024-12-08 18:26:52.150660] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.388 [2024-12-08 18:26:52.150704] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.388 [2024-12-08 18:26:52.150888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.388 [2024-12-08 18:26:52.206075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.956 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.956 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:34.956 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:34.956 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.956 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.216 [2024-12-08 18:26:52.918400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.216 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.216 Malloc0 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.217 [2024-12-08 18:26:52.978918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=76435 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 76435 /var/tmp/bdevperf.sock 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76435 ']' 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.217 18:26:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.217 [2024-12-08 18:26:53.032202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:35.217 [2024-12-08 18:26:53.032528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76435 ] 00:09:35.477 [2024-12-08 18:26:53.168184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.477 [2024-12-08 18:26:53.234678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.477 [2024-12-08 18:26:53.291013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.477 18:26:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.477 18:26:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:35.477 18:26:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:35.477 18:26:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.477 18:26:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.736 NVMe0n1 00:09:35.736 18:26:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.736 18:26:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:35.736 Running I/O for 10 seconds... 00:09:37.611 7800.00 IOPS, 30.47 MiB/s [2024-12-08T18:26:56.928Z] 8337.50 IOPS, 32.57 MiB/s [2024-12-08T18:26:57.865Z] 8784.33 IOPS, 34.31 MiB/s [2024-12-08T18:26:58.804Z] 9007.25 IOPS, 35.18 MiB/s [2024-12-08T18:26:59.756Z] 9044.00 IOPS, 35.33 MiB/s [2024-12-08T18:27:00.691Z] 9151.67 IOPS, 35.75 MiB/s [2024-12-08T18:27:01.626Z] 9218.00 IOPS, 36.01 MiB/s [2024-12-08T18:27:02.559Z] 9241.38 IOPS, 36.10 MiB/s [2024-12-08T18:27:03.956Z] 9314.44 IOPS, 36.38 MiB/s [2024-12-08T18:27:03.956Z] 9341.40 IOPS, 36.49 MiB/s 00:09:46.026 Latency(us) 00:09:46.026 [2024-12-08T18:27:03.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.026 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:46.026 Verification LBA range: start 0x0 length 0x4000 00:09:46.026 NVMe0n1 : 10.06 9382.07 36.65 0.00 0.00 108694.94 13702.98 85315.96 00:09:46.026 [2024-12-08T18:27:03.956Z] =================================================================================================================== 00:09:46.026 [2024-12-08T18:27:03.956Z] Total : 9382.07 36.65 0.00 0.00 108694.94 13702.98 85315.96 00:09:46.026 { 00:09:46.026 "results": [ 00:09:46.026 { 00:09:46.026 "job": "NVMe0n1", 00:09:46.026 "core_mask": "0x1", 00:09:46.026 "workload": "verify", 00:09:46.026 "status": "finished", 00:09:46.026 "verify_range": { 00:09:46.026 "start": 0, 00:09:46.026 "length": 16384 00:09:46.026 }, 00:09:46.026 "queue_depth": 1024, 00:09:46.026 "io_size": 4096, 00:09:46.026 "runtime": 10.062921, 00:09:46.026 "iops": 9382.0670956276, 00:09:46.026 "mibps": 36.648699592295316, 00:09:46.026 "io_failed": 0, 00:09:46.026 "io_timeout": 0, 00:09:46.026 "avg_latency_us": 108694.93634243314, 00:09:46.026 "min_latency_us": 13702.981818181817, 00:09:46.026 "max_latency_us": 85315.95636363636 00:09:46.026 
} 00:09:46.026 ], 00:09:46.026 "core_count": 1 00:09:46.026 } 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 76435 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76435 ']' 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76435 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76435 00:09:46.026 killing process with pid 76435 00:09:46.026 Received shutdown signal, test time was about 10.000000 seconds 00:09:46.026 00:09:46.026 Latency(us) 00:09:46.026 [2024-12-08T18:27:03.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.026 [2024-12-08T18:27:03.956Z] =================================================================================================================== 00:09:46.026 [2024-12-08T18:27:03.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.026 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76435' 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76435 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76435 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.027 rmmod nvme_tcp 00:09:46.027 rmmod nvme_fabrics 00:09:46.027 rmmod nvme_keyring 00:09:46.027 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 76402 ']' 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 76402 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76402 ']' 00:09:46.286 
18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76402 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76402 00:09:46.286 killing process with pid 76402 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76402' 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76402 00:09:46.286 18:27:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76402 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.286 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:46.544 18:27:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:46.544 00:09:46.544 real 0m13.136s 00:09:46.544 user 0m21.920s 00:09:46.544 sys 0m2.293s 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.544 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.544 ************************************ 00:09:46.544 END TEST nvmf_queue_depth 00:09:46.544 ************************************ 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.804 ************************************ 00:09:46.804 START TEST nvmf_target_multipath 00:09:46.804 ************************************ 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:46.804 * Looking for test storage... 
00:09:46.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:46.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.804 --rc genhtml_branch_coverage=1 00:09:46.804 --rc genhtml_function_coverage=1 00:09:46.804 --rc genhtml_legend=1 00:09:46.804 --rc geninfo_all_blocks=1 00:09:46.804 --rc geninfo_unexecuted_blocks=1 00:09:46.804 00:09:46.804 ' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:46.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.804 --rc genhtml_branch_coverage=1 00:09:46.804 --rc genhtml_function_coverage=1 00:09:46.804 --rc genhtml_legend=1 00:09:46.804 --rc geninfo_all_blocks=1 00:09:46.804 --rc geninfo_unexecuted_blocks=1 00:09:46.804 00:09:46.804 ' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:46.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.804 --rc genhtml_branch_coverage=1 00:09:46.804 --rc genhtml_function_coverage=1 00:09:46.804 --rc genhtml_legend=1 00:09:46.804 --rc geninfo_all_blocks=1 00:09:46.804 --rc geninfo_unexecuted_blocks=1 00:09:46.804 00:09:46.804 ' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:46.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.804 --rc genhtml_branch_coverage=1 00:09:46.804 --rc genhtml_function_coverage=1 00:09:46.804 --rc genhtml_legend=1 00:09:46.804 --rc geninfo_all_blocks=1 00:09:46.804 --rc geninfo_unexecuted_blocks=1 00:09:46.804 00:09:46.804 ' 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.804 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.805 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.805 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.805 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.805 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.065 
18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:47.065 18:27:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:47.065 Cannot find device "nvmf_init_br" 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:47.065 Cannot find device "nvmf_init_br2" 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:47.065 Cannot find device "nvmf_tgt_br" 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.065 Cannot find device "nvmf_tgt_br2" 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:47.065 Cannot find device "nvmf_init_br" 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:47.065 Cannot find device "nvmf_init_br2" 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:47.065 Cannot find device "nvmf_tgt_br" 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:47.065 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:47.065 Cannot find device "nvmf_tgt_br2" 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:47.066 Cannot find device "nvmf_br" 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:47.066 Cannot find device "nvmf_init_if" 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:47.066 Cannot find device "nvmf_init_if2" 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
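
For reference, the topology that nvmf_veth_init assembles in this trace boils down to: the initiator-side veth ends stay in the root namespace, the target-side ends are moved into nvmf_tgt_ns_spdk, and both sides are joined through the nvmf_br bridge with the NVMe/TCP port admitted in iptables (the bridge and firewall steps follow immediately below). A condensed sketch of one of the two interface pairs, using only interface names, addresses, and rules that appear in this trace:

    # One veth pair of the nvmf_veth_init topology (the second pair is analogous).
    ip netns add nvmf_tgt_ns_spdk                                # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listener address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up      # bridge joins both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
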
00:09:47.066 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.325 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.325 18:27:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:47.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:47.325 00:09:47.325 --- 10.0.0.3 ping statistics --- 00:09:47.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.325 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:47.325 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:47.325 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:09:47.325 00:09:47.325 --- 10.0.0.4 ping statistics --- 00:09:47.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.325 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:47.325 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:47.326 00:09:47.326 --- 10.0.0.1 ping statistics --- 00:09:47.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.326 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:47.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:09:47.326 00:09:47.326 --- 10.0.0.2 ping statistics --- 00:09:47.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.326 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
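
Before the target application is launched, connectivity across the bridge is verified in both directions with single-packet pings; the four checks above reduce to the following (addresses as assigned during nvmf_veth_init):

    ping -c 1 10.0.0.3                                   # root namespace -> first target address
    ping -c 1 10.0.0.4                                   # root namespace -> second target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> first initiator address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2    # target namespace -> second initiator address
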
00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=76806 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 76806 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 76806 ']' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.326 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.326 [2024-12-08 18:27:05.178624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:47.326 [2024-12-08 18:27:05.178893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.584 [2024-12-08 18:27:05.319061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.584 [2024-12-08 18:27:05.393347] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.584 [2024-12-08 18:27:05.393675] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.584 [2024-12-08 18:27:05.393853] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.585 [2024-12-08 18:27:05.394001] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.585 [2024-12-08 18:27:05.394051] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
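
The target itself is started inside the namespace with every tracepoint group enabled (-e 0xFFFF) and a four-core mask (-m 0xF), and the harness blocks until the app answers on /var/tmp/spdk.sock before issuing any configuration RPCs. A minimal sketch of that launch; the readiness loop below is an illustrative stand-in for the waitforlisten helper, not its actual implementation:

    # Launch nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until the RPC server accepts a trivial call.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
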
00:09:47.585 [2024-12-08 18:27:05.394352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.585 [2024-12-08 18:27:05.394447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.585 [2024-12-08 18:27:05.394529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.585 [2024-12-08 18:27:05.394531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.585 [2024-12-08 18:27:05.451370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.843 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.843 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:47.843 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:47.843 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.843 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.843 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.843 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.101 [2024-12-08 18:27:05.854202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.101 18:27:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:48.360 Malloc0 00:09:48.360 18:27:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:48.618 18:27:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.877 18:27:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:49.136 [2024-12-08 18:27:06.937286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:49.136 18:27:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:49.395 [2024-12-08 18:27:07.173460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:49.395 18:27:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:49.655 18:27:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:09:49.655 18:27:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.655 18:27:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:49.655 18:27:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.655 18:27:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:49.655 18:27:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:51.562 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:51.562 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:51.562 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:51.822 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:51.823 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:51.823 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:51.823 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:51.823 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:51.823 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=76888 00:09:51.823 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:51.823 18:27:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:51.823 [global] 00:09:51.823 thread=1 00:09:51.823 invalidate=1 00:09:51.823 rw=randrw 00:09:51.823 time_based=1 00:09:51.823 runtime=6 00:09:51.823 ioengine=libaio 00:09:51.823 direct=1 00:09:51.823 bs=4096 00:09:51.823 iodepth=128 00:09:51.823 norandommap=0 00:09:51.823 numjobs=1 00:09:51.823 00:09:51.823 verify_dump=1 00:09:51.823 verify_backlog=512 00:09:51.823 verify_state_save=0 00:09:51.823 do_verify=1 00:09:51.823 verify=crc32c-intel 00:09:51.823 [job0] 00:09:51.823 filename=/dev/nvme0n1 00:09:51.823 Could not set queue depth (nvme0n1) 00:09:51.823 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.823 fio-3.35 00:09:51.823 Starting 1 thread 00:09:52.761 18:27:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:53.020 18:27:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
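
The check_ana_state helper traced here reads the ANA state of a single controller path from sysfs and compares it against the expected value, giving the state time to converge after each nvmf_subsystem_listener_set_ana_state call. A minimal re-creation under those assumptions (the local names and the /sys/block/<path>/ana_state file come from the trace; the retry loop itself is an assumption based on the timeout=20 local, as the actual helper lives in target/multipath.sh):

    # Assumed shape of check_ana_state: poll sysfs until the path reports the expected ANA state.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || "$(cat "$ana_state_f")" != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1   # give up after ~20 one-second retries
            sleep 1
        done
    }

    # Usage as in the surrounding trace, after flipping the listeners' ANA states:
    check_ana_state nvme0c0n1 inaccessible
    check_ana_state nvme0c1n1 non-optimized
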
00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:53.280 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:53.539 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:53.799 18:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 76888 00:09:57.994 00:09:57.994 job0: (groupid=0, jobs=1): err= 0: pid=76915: Sun Dec 8 18:27:15 2024 00:09:57.994 read: IOPS=9592, BW=37.5MiB/s (39.3MB/s)(225MiB/6008msec) 00:09:57.994 slat (usec): min=7, max=9700, avg=62.94, stdev=246.62 00:09:57.994 clat (usec): min=2289, max=18636, avg=9136.09, stdev=1561.03 00:09:57.994 lat (usec): min=2300, max=18668, avg=9199.04, stdev=1565.46 00:09:57.994 clat percentiles (usec): 00:09:57.994 | 1.00th=[ 4817], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 8225], 00:09:57.994 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:09:57.994 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[12518], 00:09:57.994 | 99.00th=[14353], 99.50th=[14746], 99.90th=[16319], 99.95th=[16909], 00:09:57.994 | 99.99th=[17957] 00:09:57.994 bw ( KiB/s): min= 4144, max=26416, per=51.10%, avg=19609.33, stdev=6941.31, samples=12 00:09:57.994 iops : min= 1036, max= 6604, avg=4902.33, stdev=1735.33, samples=12 00:09:57.994 write: IOPS=5832, BW=22.8MiB/s (23.9MB/s)(116MiB/5078msec); 0 zone resets 00:09:57.994 slat (usec): min=15, max=2260, avg=69.08, stdev=175.30 00:09:57.994 clat (usec): min=1864, max=18519, avg=7937.36, stdev=1463.12 00:09:57.994 lat (usec): min=1889, max=18543, avg=8006.44, stdev=1468.32 00:09:57.994 clat percentiles (usec): 00:09:57.994 | 1.00th=[ 3556], 5.00th=[ 4817], 10.00th=[ 6194], 20.00th=[ 7242], 00:09:57.994 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:57.994 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9634], 00:09:57.994 | 99.00th=[12518], 99.50th=[13042], 99.90th=[14615], 99.95th=[14877], 00:09:57.994 | 99.99th=[15795] 00:09:57.994 bw ( KiB/s): min= 4544, max=25744, per=84.47%, avg=19706.00, stdev=6765.06, samples=12 00:09:57.994 iops : min= 1136, max= 6436, avg=4926.50, stdev=1691.26, samples=12 00:09:57.994 lat (msec) : 2=0.01%, 4=0.86%, 10=87.87%, 20=11.26% 00:09:57.994 cpu : usr=4.88%, sys=20.18%, ctx=5038, majf=0, minf=127 00:09:57.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:57.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.994 issued rwts: total=57634,29615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.994 00:09:57.994 Run status group 0 (all jobs): 00:09:57.994 READ: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=225MiB (236MB), run=6008-6008msec 00:09:57.994 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=116MiB (121MB), run=5078-5078msec 00:09:57.994 00:09:57.994 Disk stats (read/write): 00:09:57.994 nvme0n1: ios=56790/29022, merge=0/0, ticks=499217/217237, in_queue=716454, util=98.55% 00:09:57.994 18:27:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:58.252 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76991 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:58.819 18:27:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:58.819 [global] 00:09:58.820 thread=1 00:09:58.820 invalidate=1 00:09:58.820 rw=randrw 00:09:58.820 time_based=1 00:09:58.820 runtime=6 00:09:58.820 ioengine=libaio 00:09:58.820 direct=1 00:09:58.820 bs=4096 00:09:58.820 iodepth=128 00:09:58.820 norandommap=0 00:09:58.820 numjobs=1 00:09:58.820 00:09:58.820 verify_dump=1 00:09:58.820 verify_backlog=512 00:09:58.820 verify_state_save=0 00:09:58.820 do_verify=1 00:09:58.820 verify=crc32c-intel 00:09:58.820 [job0] 00:09:58.820 filename=/dev/nvme0n1 00:09:58.820 Could not set queue depth (nvme0n1) 00:09:58.820 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.820 fio-3.35 00:09:58.820 Starting 1 thread 00:09:59.765 18:27:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:00.031 18:27:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:00.290 
18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:00.290 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:00.548 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:00.808 18:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76991 00:10:05.003 00:10:05.003 job0: (groupid=0, jobs=1): err= 0: pid=77017: Sun Dec 8 18:27:22 2024 00:10:05.003 read: IOPS=11.2k, BW=43.9MiB/s (46.0MB/s)(264MiB/6006msec) 00:10:05.003 slat (usec): min=4, max=8278, avg=43.44, stdev=192.49 00:10:05.003 clat (usec): min=355, max=18183, avg=7824.52, stdev=2044.67 00:10:05.003 lat (usec): min=374, max=18193, avg=7867.96, stdev=2056.85 00:10:05.003 clat percentiles (usec): 00:10:05.003 | 1.00th=[ 3228], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 6259], 00:10:05.003 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 8225], 00:10:05.003 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[11469], 00:10:05.003 | 99.00th=[13435], 99.50th=[14222], 99.90th=[16581], 99.95th=[17171], 00:10:05.003 | 99.99th=[17957] 00:10:05.003 bw ( KiB/s): min= 9712, max=36320, per=53.64%, avg=24101.09, stdev=7504.80, samples=11 00:10:05.003 iops : min= 2428, max= 9080, avg=6025.45, stdev=1876.20, samples=11 00:10:05.003 write: IOPS=6625, BW=25.9MiB/s (27.1MB/s)(139MiB/5385msec); 0 zone resets 00:10:05.003 slat (usec): min=11, max=1902, avg=56.40, stdev=144.61 00:10:05.003 clat (usec): min=852, max=16045, avg=6576.82, stdev=1798.87 00:10:05.003 lat (usec): min=880, max=16067, avg=6633.23, stdev=1813.40 00:10:05.003 clat percentiles (usec): 00:10:05.003 | 1.00th=[ 2769], 5.00th=[ 3425], 10.00th=[ 3884], 20.00th=[ 4621], 00:10:05.004 | 30.00th=[ 5800], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 7308], 00:10:05.004 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 8979], 00:10:05.004 | 99.00th=[10814], 99.50th=[11731], 99.90th=[13435], 99.95th=[13960], 00:10:05.004 | 99.99th=[15139] 00:10:05.004 bw ( KiB/s): min= 9992, max=37256, per=91.00%, avg=24117.82, stdev=7402.53, samples=11 00:10:05.004 iops : min= 2498, max= 9314, avg=6029.45, stdev=1850.63, samples=11 00:10:05.004 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.03% 00:10:05.004 lat (msec) : 2=0.18%, 4=5.75%, 10=86.75%, 20=7.26% 00:10:05.004 cpu : usr=5.83%, sys=22.03%, ctx=5947, majf=0, minf=72 00:10:05.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:05.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.004 issued rwts: total=67460,35677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.004 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:05.004 00:10:05.004 Run status group 0 (all jobs): 00:10:05.004 READ: bw=43.9MiB/s (46.0MB/s), 43.9MiB/s-43.9MiB/s (46.0MB/s-46.0MB/s), io=264MiB (276MB), run=6006-6006msec 00:10:05.004 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=139MiB (146MB), run=5385-5385msec 00:10:05.004 00:10:05.004 Disk stats (read/write): 00:10:05.004 nvme0n1: ios=66565/35066, merge=0/0, ticks=497054/215124, in_queue=712178, util=98.68% 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:05.004 18:27:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.262 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:05.262 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:05.262 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:05.262 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:05.262 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:05.262 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.521 rmmod nvme_tcp 00:10:05.521 rmmod nvme_fabrics 00:10:05.521 rmmod nvme_keyring 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 
76806 ']' 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 76806 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 76806 ']' 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 76806 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76806 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.521 killing process with pid 76806 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76806' 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 76806 00:10:05.521 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 76806 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:05.780 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:06.039 18:27:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.039 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:06.040 00:10:06.040 real 0m19.393s 00:10:06.040 user 1m12.087s 00:10:06.040 sys 0m9.113s 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.040 ************************************ 00:10:06.040 END TEST nvmf_target_multipath 00:10:06.040 ************************************ 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.040 ************************************ 00:10:06.040 START TEST nvmf_zcopy 00:10:06.040 ************************************ 00:10:06.040 18:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:06.300 * Looking for test storage... 
00:10:06.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:06.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.300 --rc genhtml_branch_coverage=1 00:10:06.300 --rc genhtml_function_coverage=1 00:10:06.300 --rc genhtml_legend=1 00:10:06.300 --rc geninfo_all_blocks=1 00:10:06.300 --rc geninfo_unexecuted_blocks=1 00:10:06.300 00:10:06.300 ' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:06.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.300 --rc genhtml_branch_coverage=1 00:10:06.300 --rc genhtml_function_coverage=1 00:10:06.300 --rc genhtml_legend=1 00:10:06.300 --rc geninfo_all_blocks=1 00:10:06.300 --rc geninfo_unexecuted_blocks=1 00:10:06.300 00:10:06.300 ' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:06.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.300 --rc genhtml_branch_coverage=1 00:10:06.300 --rc genhtml_function_coverage=1 00:10:06.300 --rc genhtml_legend=1 00:10:06.300 --rc geninfo_all_blocks=1 00:10:06.300 --rc geninfo_unexecuted_blocks=1 00:10:06.300 00:10:06.300 ' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:06.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.300 --rc genhtml_branch_coverage=1 00:10:06.300 --rc genhtml_function_coverage=1 00:10:06.300 --rc genhtml_legend=1 00:10:06.300 --rc geninfo_all_blocks=1 00:10:06.300 --rc geninfo_unexecuted_blocks=1 00:10:06.300 00:10:06.300 ' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.300 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.301 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:06.301 Cannot find device "nvmf_init_br" 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:06.301 18:27:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:06.301 Cannot find device "nvmf_init_br2" 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:06.301 Cannot find device "nvmf_tgt_br" 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.301 Cannot find device "nvmf_tgt_br2" 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:06.301 Cannot find device "nvmf_init_br" 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:06.301 Cannot find device "nvmf_init_br2" 00:10:06.301 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:06.560 Cannot find device "nvmf_tgt_br" 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:06.560 Cannot find device "nvmf_tgt_br2" 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:06.560 Cannot find device "nvmf_br" 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:06.560 Cannot find device "nvmf_init_if" 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:06.560 Cannot find device "nvmf_init_if2" 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:06.560 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:06.561 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:06.820 18:27:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:06.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:10:06.820 00:10:06.820 --- 10.0.0.3 ping statistics --- 00:10:06.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.820 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:06.820 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:06.820 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:10:06.820 00:10:06.820 --- 10.0.0.4 ping statistics --- 00:10:06.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.820 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:06.820 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:10:06.820 00:10:06.820 --- 10.0.0.1 ping statistics --- 00:10:06.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.821 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:06.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:10:06.821 00:10:06.821 --- 10.0.0.2 ping statistics --- 00:10:06.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.821 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=77320 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 77320 00:10:06.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 77320 ']' 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.821 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.821 [2024-12-08 18:27:24.613843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:06.821 [2024-12-08 18:27:24.613926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.821 [2024-12-08 18:27:24.744241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.080 [2024-12-08 18:27:24.804459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.080 [2024-12-08 18:27:24.804510] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.080 [2024-12-08 18:27:24.804536] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.080 [2024-12-08 18:27:24.804543] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.080 [2024-12-08 18:27:24.804549] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.080 [2024-12-08 18:27:24.804575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.080 [2024-12-08 18:27:24.857172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.080 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.080 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:07.080 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.081 [2024-12-08 18:27:24.962797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.081 [2024-12-08 18:27:24.978870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.081 18:27:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.340 malloc0 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:07.340 { 00:10:07.340 "params": { 00:10:07.340 "name": "Nvme$subsystem", 00:10:07.340 "trtype": "$TEST_TRANSPORT", 00:10:07.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.340 "adrfam": "ipv4", 00:10:07.340 "trsvcid": "$NVMF_PORT", 00:10:07.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.340 "hdgst": ${hdgst:-false}, 00:10:07.340 "ddgst": ${ddgst:-false} 00:10:07.340 }, 00:10:07.340 "method": "bdev_nvme_attach_controller" 00:10:07.340 } 00:10:07.340 EOF 00:10:07.340 )") 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:07.340 18:27:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:07.340 "params": { 00:10:07.340 "name": "Nvme1", 00:10:07.340 "trtype": "tcp", 00:10:07.340 "traddr": "10.0.0.3", 00:10:07.340 "adrfam": "ipv4", 00:10:07.340 "trsvcid": "4420", 00:10:07.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.340 "hdgst": false, 00:10:07.340 "ddgst": false 00:10:07.340 }, 00:10:07.340 "method": "bdev_nvme_attach_controller" 00:10:07.340 }' 00:10:07.340 [2024-12-08 18:27:25.096193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:07.341 [2024-12-08 18:27:25.096284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77346 ] 00:10:07.341 [2024-12-08 18:27:25.238511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.599 [2024-12-08 18:27:25.300591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.599 [2024-12-08 18:27:25.364073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.599 Running I/O for 10 seconds... 00:10:09.910 6599.00 IOPS, 51.55 MiB/s [2024-12-08T18:27:28.776Z] 6642.50 IOPS, 51.89 MiB/s [2024-12-08T18:27:29.713Z] 6718.67 IOPS, 52.49 MiB/s [2024-12-08T18:27:30.646Z] 6715.50 IOPS, 52.46 MiB/s [2024-12-08T18:27:31.578Z] 6724.00 IOPS, 52.53 MiB/s [2024-12-08T18:27:32.520Z] 6773.67 IOPS, 52.92 MiB/s [2024-12-08T18:27:33.899Z] 6800.57 IOPS, 53.13 MiB/s [2024-12-08T18:27:34.836Z] 6812.12 IOPS, 53.22 MiB/s [2024-12-08T18:27:35.771Z] 6800.78 IOPS, 53.13 MiB/s [2024-12-08T18:27:35.771Z] 6810.10 IOPS, 53.20 MiB/s 00:10:17.841 Latency(us) 00:10:17.841 [2024-12-08T18:27:35.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.841 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:17.841 Verification LBA range: start 0x0 length 0x1000 00:10:17.841 Nvme1n1 : 10.02 6810.93 53.21 0.00 0.00 18733.26 2621.44 34078.72 00:10:17.841 [2024-12-08T18:27:35.771Z] =================================================================================================================== 00:10:17.841 [2024-12-08T18:27:35.771Z] Total : 6810.93 53.21 0.00 0.00 18733.26 2621.44 34078.72 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=77463 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:17.841 18:27:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:17.841 { 00:10:17.841 "params": { 00:10:17.841 "name": "Nvme$subsystem", 00:10:17.841 "trtype": "$TEST_TRANSPORT", 00:10:17.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.841 "adrfam": "ipv4", 00:10:17.841 "trsvcid": "$NVMF_PORT", 00:10:17.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.841 "hdgst": ${hdgst:-false}, 00:10:17.841 "ddgst": ${ddgst:-false} 00:10:17.841 }, 00:10:17.841 "method": "bdev_nvme_attach_controller" 00:10:17.841 } 00:10:17.841 EOF 00:10:17.841 )") 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:17.841 [2024-12-08 18:27:35.694217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.841 [2024-12-08 18:27:35.694392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:17.841 18:27:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:17.841 "params": { 00:10:17.841 "name": "Nvme1", 00:10:17.841 "trtype": "tcp", 00:10:17.841 "traddr": "10.0.0.3", 00:10:17.841 "adrfam": "ipv4", 00:10:17.841 "trsvcid": "4420", 00:10:17.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.841 "hdgst": false, 00:10:17.841 "ddgst": false 00:10:17.841 }, 00:10:17.841 "method": "bdev_nvme_attach_controller" 00:10:17.841 }' 00:10:17.841 [2024-12-08 18:27:35.702149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.841 [2024-12-08 18:27:35.702179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.841 [2024-12-08 18:27:35.710145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.841 [2024-12-08 18:27:35.710174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.841 [2024-12-08 18:27:35.722144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.841 [2024-12-08 18:27:35.722172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.841 [2024-12-08 18:27:35.734147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.841 [2024-12-08 18:27:35.734176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.841 [2024-12-08 18:27:35.746151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.841 [2024-12-08 18:27:35.746178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.841 [2024-12-08 18:27:35.752544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:17.841 [2024-12-08 18:27:35.752667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77463 ] 00:10:17.841 [2024-12-08 18:27:35.758156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.841 [2024-12-08 18:27:35.758185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.770161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.770189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.782175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.782202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.794158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.794184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.806162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.806189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.818164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.818192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.830169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.830196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.842167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.842194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.854174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.854202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.866176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.866363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.878187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.878365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.890186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.890308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.100 [2024-12-08 18:27:35.890390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.898214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.898455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.906200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.906400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.914193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.914373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.922192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.922367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.930202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.930392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.938209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.938447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.946201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.946361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.954218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.954392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.962222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.100 [2024-12-08 18:27:35.962396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.100 [2024-12-08 18:27:35.963621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.100 [2024-12-08 18:27:35.970222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:35.970380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.101 [2024-12-08 18:27:35.978222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:35.978395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.101 [2024-12-08 18:27:35.986226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:35.986428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.101 [2024-12-08 18:27:35.994230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:35.994393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.101 [2024-12-08 18:27:36.002233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:36.002452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.101 [2024-12-08 18:27:36.010235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:36.010433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.101 
[2024-12-08 18:27:36.018233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:36.018434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.101 [2024-12-08 18:27:36.024098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.101 [2024-12-08 18:27:36.026237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.101 [2024-12-08 18:27:36.026390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.034238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.034437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.042241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.042442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.050243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.050443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.058242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.058438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.066257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.066451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.074261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.074448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.082265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.082451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.090268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.090454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.098274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.098307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.106277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.106309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.114283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.114475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.122311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.122481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 
18:27:36.130291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.130485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 Running I/O for 5 seconds... 00:10:18.360 [2024-12-08 18:27:36.138305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.138477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.151338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.151541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.162504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.162677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.170935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.171118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.182825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.183010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.192382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.192583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.202193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.202377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.212145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.212329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.223053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.223262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.235357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.235579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.244711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.244913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.255157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.360 [2024-12-08 18:27:36.255342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.360 [2024-12-08 18:27:36.265571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.361 [2024-12-08 18:27:36.265735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.361 [2024-12-08 18:27:36.280253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:18.361 [2024-12-08 18:27:36.280481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.289381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.289602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.299714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.299896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.309070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.309256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.318768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.318950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.328075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.328258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.337580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.337751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.347201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.347236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.356664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.356869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.366483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.366518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.375464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.375499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.384709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.384760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.393801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.393835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.403043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.403077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.412433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.412606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.422353] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.422552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.433033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.433228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.443595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.443805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.453812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.454032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.464546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.464737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.477049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.477238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.485821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.486006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.495866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.496060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.505277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.505493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.514707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.620 [2024-12-08 18:27:36.514907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.620 [2024-12-08 18:27:36.524268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.621 [2024-12-08 18:27:36.524461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.621 [2024-12-08 18:27:36.533567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.621 [2024-12-08 18:27:36.533736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.621 [2024-12-08 18:27:36.543154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.621 [2024-12-08 18:27:36.543336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.552858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.553040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.562510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.562680] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.572209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.572391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.581660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.581862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.591266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.591464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.600741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.600955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.610109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.610293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.620048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.620234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.630018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.630069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.640150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.640203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.657978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.658091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.675031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.675307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.687886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.688094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.702419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.702594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.716370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.716584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.730178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.730373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.746747] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.746947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.758518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.758665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.768564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.768717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.779985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.780156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.788203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.788358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.880 [2024-12-08 18:27:36.801053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.880 [2024-12-08 18:27:36.801268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-12-08 18:27:36.811630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.139 [2024-12-08 18:27:36.811856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.139 [2024-12-08 18:27:36.821539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.821694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.831080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.831238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.841133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.841277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.851154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.851314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.860769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.860912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.870498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.870529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.879805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.879837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.888935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.888965] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.898727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.898758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.908411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.908449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.918991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.919024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.932508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.932540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.942784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.942818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.956659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.956689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.964959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.964989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.976378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.976420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.987573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.987603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:36.995168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:36.995198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:37.007180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:37.007211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:37.018074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:37.018263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:37.026218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:37.026249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:37.037960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:37.037991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:37.048837] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:37.048999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.140 [2024-12-08 18:27:37.057535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.140 [2024-12-08 18:27:37.057585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.070245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.070279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.079642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.079698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.089218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.089250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.098796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.098959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.108486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.108516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.117833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.118008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.127375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.127418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.136551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.136582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 12362.00 IOPS, 96.58 MiB/s [2024-12-08T18:27:37.329Z] [2024-12-08 18:27:37.145663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.145694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.155000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.155030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.164082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.164247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.173234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.173265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.182093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:19.399 [2024-12-08 18:27:37.182124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.191685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.191730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.201264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.201296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.212724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.212757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.224293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.224324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.235188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.235345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.247156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.247187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.256033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.256064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.265939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.266095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.275499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.275531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.285301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.285475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.294923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.399 [2024-12-08 18:27:37.294953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.399 [2024-12-08 18:27:37.304462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.400 [2024-12-08 18:27:37.304490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.400 [2024-12-08 18:27:37.314337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.400 [2024-12-08 18:27:37.314387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.400 [2024-12-08 18:27:37.325033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.400 [2024-12-08 18:27:37.325066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.337191] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.337224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.351772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.351937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.360635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.360666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.372920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.372951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.383733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.383765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.391908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.391940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.403403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.403584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.414612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.658 [2024-12-08 18:27:37.414761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.658 [2024-12-08 18:27:37.423063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.423094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.434586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.434616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.445605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.445765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.453522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.453552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.464499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.464527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.473186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.473218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.483188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.483218] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.493903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.493936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.505065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.505097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.520674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.520705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.538688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.538844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.549046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.549079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.559785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.559819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.659 [2024-12-08 18:27:37.572248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.659 [2024-12-08 18:27:37.572282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.589957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.590159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.599758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.599793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.609739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.609770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.621263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.621294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.629242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.629273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.640754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.640785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.649860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.649889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.661439] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.661479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.670995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.671026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.682755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.682788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.692650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.692697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.702795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.702826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.712516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.712545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.722112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.722147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.732386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.732575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.743843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.744009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.755159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.755313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.766375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.766594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.784161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.784300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.794433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.794576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.802906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.803048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.813860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.814013] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.824938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.825092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.834209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.834352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.918 [2024-12-08 18:27:37.845509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.918 [2024-12-08 18:27:37.845658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.855494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.855641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.864653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.864802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.873520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.873661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.882356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.882540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.891383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.891553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.900192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.900332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.909045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.909183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.918110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.918252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.927816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.927981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.937459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.937598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.946643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.946784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.955506] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.955645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.964875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.965011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.973830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.177 [2024-12-08 18:27:37.973973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.177 [2024-12-08 18:27:37.983117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:37.983261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:37.992063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:37.992225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.001001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.001144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.010027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.010152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.018646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.018801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.029169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.029202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.039781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.039932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.051908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.051953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.061787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.061818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.072909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.072940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.087856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.087888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.178 [2024-12-08 18:27:38.097661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.178 [2024-12-08 18:27:38.097692] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.108336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.108386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.119054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.119219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.127399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.127586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 12683.00 IOPS, 99.09 MiB/s [2024-12-08T18:27:38.367Z] [2024-12-08 18:27:38.138325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.138480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.148833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.148975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.155982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.156152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.167767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.167913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.176165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.176307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.186039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.186181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.194044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.194186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.205014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.205163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.216795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.216937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.224857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.224999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.235844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.235990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 
18:27:38.244483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.244625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.253502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.253645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.262434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.262575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.271193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.271333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.280340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.280499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.289289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.289455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.298347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.298502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.307513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.307666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.316929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.316976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.327919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.327952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.340370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.340415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.350764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.350943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.437 [2024-12-08 18:27:38.363240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.437 [2024-12-08 18:27:38.363291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.696 [2024-12-08 18:27:38.374496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.696 [2024-12-08 18:27:38.374530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.696 [2024-12-08 18:27:38.386074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.696 [2024-12-08 18:27:38.386249] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.696 [2024-12-08 18:27:38.402493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.696 [2024-12-08 18:27:38.402523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.696 [2024-12-08 18:27:38.418841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.696 [2024-12-08 18:27:38.418873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.696 [2024-12-08 18:27:38.429855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.696 [2024-12-08 18:27:38.429886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.696 [2024-12-08 18:27:38.445795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.445826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.461982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.462013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.473010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.473165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.489189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.489221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.499928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.500094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.516556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.516700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.526954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.527105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.535123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.535265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.546541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.546684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.555877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.556039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.566683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.566830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.574894] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.575048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.586314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.586478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.596581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.596728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.605116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.605275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.697 [2024-12-08 18:27:38.617793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.697 [2024-12-08 18:27:38.617945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.629684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.629860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.640856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.641039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.655974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.656187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.666139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.666284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.678056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.678201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.688991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.689168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.706995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.707169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.723206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.723351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.732364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.732521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.744560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.744591] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.755235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.755266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.770499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.770529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.781559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.781711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.790318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.956 [2024-12-08 18:27:38.790349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.956 [2024-12-08 18:27:38.800342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.800373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.809595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.809625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.818889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.818919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.828197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.828227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.837451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.837480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.846879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.846910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.856237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.856267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.865900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.865950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.957 [2024-12-08 18:27:38.877805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.957 [2024-12-08 18:27:38.877840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.890017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.890082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.901464] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.901653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.914083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.914114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.925302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.925469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.933958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.934098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.944560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.944706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.954111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.954267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.963984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.964174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.973708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.973863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.983920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.984147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:38.993987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:38.994134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.004246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.004390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.014054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.014211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.024321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.024548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.034182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.034324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.043719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.043872] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.053388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.053554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.062787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.062945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.072337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.072494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.082053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.082197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.091735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.091886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.101040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.101162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.111564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.111776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.123745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.123891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.216 [2024-12-08 18:27:39.134969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.216 [2024-12-08 18:27:39.135168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 12785.33 IOPS, 99.89 MiB/s [2024-12-08T18:27:39.407Z] [2024-12-08 18:27:39.148111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.148331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.158365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.158533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.168266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.168297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.177807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.177968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.187306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.187338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 
18:27:39.196910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.197060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.206766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.206919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.216890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.217052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.227066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.227213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.236520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.236665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.245716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.245871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.255481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.255626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.265285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.265450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.274980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.275137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.285021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.285181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.294395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.294553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.304282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.304446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.314137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.477 [2024-12-08 18:27:39.314291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.477 [2024-12-08 18:27:39.323920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.324082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.478 [2024-12-08 18:27:39.333342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.333498] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.478 [2024-12-08 18:27:39.343339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.343456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.478 [2024-12-08 18:27:39.354789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.354945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.478 [2024-12-08 18:27:39.368147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.368297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.478 [2024-12-08 18:27:39.378321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.378493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.478 [2024-12-08 18:27:39.390091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.390279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.478 [2024-12-08 18:27:39.401435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.478 [2024-12-08 18:27:39.401641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.418805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.418966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.436827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.436984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.452464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.452510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.461619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.461650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.472994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.473024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.481975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.482006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.491622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.491830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.500644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.500675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.509807] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.509838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.518859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.519020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.528360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.528392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.537769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.537929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.547558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.547588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.556933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.557097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.566566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.566596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.575574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.737 [2024-12-08 18:27:39.575619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.737 [2024-12-08 18:27:39.586410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.738 [2024-12-08 18:27:39.586467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.738 [2024-12-08 18:27:39.598813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.738 [2024-12-08 18:27:39.598971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.738 [2024-12-08 18:27:39.608769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.738 [2024-12-08 18:27:39.608808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.738 [2024-12-08 18:27:39.621536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.738 [2024-12-08 18:27:39.621566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.738 [2024-12-08 18:27:39.637140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.738 [2024-12-08 18:27:39.637289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.738 [2024-12-08 18:27:39.646258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.738 [2024-12-08 18:27:39.646290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.738 [2024-12-08 18:27:39.657891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.738 [2024-12-08 18:27:39.657925] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.670748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.670784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.682020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.682067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.690325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.690357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.702672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.702705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.712979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.713041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.724325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.724504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.735733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.735880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.748238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.748383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.757860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.758027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.774130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.774274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.791521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.791667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.801162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.801316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.814886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.815028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.830379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.830546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.841686] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.841836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.850056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.850199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.861343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.861499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.872667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.872806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.880517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.880693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.893208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.893366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.903271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.903440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.997 [2024-12-08 18:27:39.915896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.997 [2024-12-08 18:27:39.916077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.927602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.927816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.938020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.938165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.947935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.948126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.957338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.957514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.966993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.967136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.976587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.976732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.985985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.986141] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:39.995707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:39.995875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.005499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.005645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.015011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.015041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.023963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.023995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.033112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.033261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.046546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.046578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.054900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.054931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.065956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.065987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.075361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.075392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.089619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.089648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.104565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.104594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.122085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.122128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.132094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.132272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 12776.75 IOPS, 99.82 MiB/s [2024-12-08T18:27:40.187Z] [2024-12-08 18:27:40.143805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.143838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 
18:27:40.155891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.155925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.166762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.166963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.257 [2024-12-08 18:27:40.176586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.257 [2024-12-08 18:27:40.176617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.190604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.190651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.201823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.201855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.209872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.209903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.221249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.221281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.232401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.232590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.247411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.247454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.258258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.258290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.516 [2024-12-08 18:27:40.266297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.516 [2024-12-08 18:27:40.266327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.277570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.277600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.288471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.288501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.296330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.296490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.307467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.307496] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.316226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.316256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.329378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.329434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.337601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.337631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.348851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.348882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.358565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.358599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.373795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.373988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.385658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.385715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.398795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.398828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.408414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.408455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.418191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.418221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.428102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.428152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.517 [2024-12-08 18:27:40.438456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.517 [2024-12-08 18:27:40.438488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.776 [2024-12-08 18:27:40.449100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.776 [2024-12-08 18:27:40.449273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.776 [2024-12-08 18:27:40.460822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.776 [2024-12-08 18:27:40.460854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.776 [2024-12-08 18:27:40.476799] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.776 [2024-12-08 18:27:40.476830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.776 [... the same subsystem.c:2128 / nvmf_rpc.c:1517 error pair repeats, unchanged apart from timestamps, from 18:27:40.494 through 18:27:41.100 as the target keeps rejecting nvmf_subsystem_add_ns requests for NSID 1 ...] 00:10:23.294 [2024-12-08 18:27:41.109614] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.294 [2024-12-08 18:27:41.109645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.294 [2024-12-08 18:27:41.119418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.294 [2024-12-08 18:27:41.119585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.294 [2024-12-08 18:27:41.133357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.294 [2024-12-08 18:27:41.133382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.294 12800.40 IOPS, 100.00 MiB/s [2024-12-08T18:27:41.224Z] [2024-12-08 18:27:41.141104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.294 [2024-12-08 18:27:41.141133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.294 00:10:23.294 Latency(us) 00:10:23.294 [2024-12-08T18:27:41.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.294 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:23.294 Nvme1n1 : 5.00 12813.27 100.10 0.00 0.00 9982.71 3961.95 24903.68 00:10:23.294 [2024-12-08T18:27:41.224Z] =================================================================================================================== 00:10:23.294 [2024-12-08T18:27:41.224Z] Total : 12813.27 100.10 0.00 0.00 9982.71 3961.95 24903.68 00:10:23.294 [2024-12-08 18:27:41.152035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.152232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 18:27:41.160031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.160204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 18:27:41.168031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.168213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 18:27:41.176030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.176194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 18:27:41.184066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.184093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 18:27:41.192047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.192073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 18:27:41.200049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.200074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 18:27:41.208095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.208128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.295 [2024-12-08 
18:27:41.216075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.295 [2024-12-08 18:27:41.216105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.553 [2024-12-08 18:27:41.228114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.553 [2024-12-08 18:27:41.228160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.553 [2024-12-08 18:27:41.236073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.553 [2024-12-08 18:27:41.236133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.553 [2024-12-08 18:27:41.248174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.553 [2024-12-08 18:27:41.248201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.553 [2024-12-08 18:27:41.256129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.553 [2024-12-08 18:27:41.256155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.553 [2024-12-08 18:27:41.264129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.553 [2024-12-08 18:27:41.264154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.553 [2024-12-08 18:27:41.272156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.272182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.280139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.280288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.288148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.288176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.296130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.296171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.304155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.304179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.312170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.312195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.320172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.320198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.328173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.328197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 [2024-12-08 18:27:41.336175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.554 [2024-12-08 18:27:41.336200] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.554 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (77463) - No such process 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 77463 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.554 delay0 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.554 18:27:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:23.812 [2024-12-08 18:27:41.538181] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:30.394 Initializing NVMe Controllers 00:10:30.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:30.394 Initialization complete. Launching workers. 
00:10:30.394 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 95 00:10:30.394 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 33 00:10:30.394 success 251, unsuccessful 131, failed 0 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.394 rmmod nvme_tcp 00:10:30.394 rmmod nvme_fabrics 00:10:30.394 rmmod nvme_keyring 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 77320 ']' 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 77320 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 77320 ']' 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 77320 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77320 00:10:30.394 killing process with pid 77320 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77320' 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 77320 00:10:30.394 18:27:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 77320 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:30.394 18:27:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:30.394 00:10:30.394 real 0m24.337s 00:10:30.394 user 0m39.258s 00:10:30.394 sys 0m7.170s 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.394 ************************************ 00:10:30.394 END TEST nvmf_zcopy 00:10:30.394 ************************************ 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.394 18:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.652 ************************************ 00:10:30.652 START TEST nvmf_nmic 00:10:30.652 ************************************ 00:10:30.652 18:27:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.652 * Looking for test storage... 00:10:30.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:30.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.653 --rc genhtml_branch_coverage=1 00:10:30.653 --rc genhtml_function_coverage=1 00:10:30.653 --rc genhtml_legend=1 00:10:30.653 --rc geninfo_all_blocks=1 00:10:30.653 --rc geninfo_unexecuted_blocks=1 00:10:30.653 00:10:30.653 ' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:30.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.653 --rc genhtml_branch_coverage=1 00:10:30.653 --rc genhtml_function_coverage=1 00:10:30.653 --rc genhtml_legend=1 00:10:30.653 --rc geninfo_all_blocks=1 00:10:30.653 --rc geninfo_unexecuted_blocks=1 00:10:30.653 00:10:30.653 ' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:30.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.653 --rc genhtml_branch_coverage=1 00:10:30.653 --rc genhtml_function_coverage=1 00:10:30.653 --rc genhtml_legend=1 00:10:30.653 --rc geninfo_all_blocks=1 00:10:30.653 --rc geninfo_unexecuted_blocks=1 00:10:30.653 00:10:30.653 ' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:30.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.653 --rc genhtml_branch_coverage=1 00:10:30.653 --rc genhtml_function_coverage=1 00:10:30.653 --rc genhtml_legend=1 00:10:30.653 --rc geninfo_all_blocks=1 00:10:30.653 --rc geninfo_unexecuted_blocks=1 00:10:30.653 00:10:30.653 ' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.653 18:27:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.653 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.654 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:30.654 18:27:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:30.654 Cannot 
find device "nvmf_init_br" 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:30.654 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:30.654 Cannot find device "nvmf_init_br2" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:30.913 Cannot find device "nvmf_tgt_br" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.913 Cannot find device "nvmf_tgt_br2" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:30.913 Cannot find device "nvmf_init_br" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:30.913 Cannot find device "nvmf_init_br2" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:30.913 Cannot find device "nvmf_tgt_br" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:30.913 Cannot find device "nvmf_tgt_br2" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:30.913 Cannot find device "nvmf_br" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:30.913 Cannot find device "nvmf_init_if" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:30.913 Cannot find device "nvmf_init_if2" 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.913 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:31.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:10:31.172 00:10:31.172 --- 10.0.0.3 ping statistics --- 00:10:31.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.172 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:31.172 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:31.172 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.119 ms 00:10:31.172 00:10:31.172 --- 10.0.0.4 ping statistics --- 00:10:31.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.172 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:31.172 00:10:31.172 --- 10.0.0.1 ping statistics --- 00:10:31.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.172 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:31.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:31.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:31.172 00:10:31.172 --- 10.0.0.2 ping statistics --- 00:10:31.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.172 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.172 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=77844 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 77844 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 77844 ']' 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.173 18:27:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.173 [2024-12-08 18:27:49.040552] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:31.173 [2024-12-08 18:27:49.040656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.431 [2024-12-08 18:27:49.182872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.431 [2024-12-08 18:27:49.261563] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.431 [2024-12-08 18:27:49.261837] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.431 [2024-12-08 18:27:49.261949] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.431 [2024-12-08 18:27:49.262046] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.431 [2024-12-08 18:27:49.262148] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.431 [2024-12-08 18:27:49.262312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.431 [2024-12-08 18:27:49.262979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.431 [2024-12-08 18:27:49.263156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.431 [2024-12-08 18:27:49.263255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.431 [2024-12-08 18:27:49.320007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.689 [2024-12-08 18:27:49.450140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.689 Malloc0 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.689 18:27:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.689 [2024-12-08 18:27:49.509092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:31.689 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.689 test case1: single bdev can't be used in multiple subsystems 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.690 [2024-12-08 18:27:49.532906] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:31.690 [2024-12-08 18:27:49.533109] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:31.690 [2024-12-08 18:27:49.533218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.690 request: 00:10:31.690 { 00:10:31.690 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:31.690 "namespace": { 00:10:31.690 "bdev_name": "Malloc0", 00:10:31.690 "no_auto_visible": false 00:10:31.690 }, 00:10:31.690 "method": "nvmf_subsystem_add_ns", 00:10:31.690 "req_id": 1 00:10:31.690 } 00:10:31.690 Got JSON-RPC error response 00:10:31.690 response: 00:10:31.690 { 00:10:31.690 "code": -32602, 00:10:31.690 "message": "Invalid parameters" 00:10:31.690 } 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:31.690 Adding namespace failed - expected result. 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:31.690 test case2: host connect to nvmf target in multiple paths 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.690 [2024-12-08 18:27:49.544973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.690 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:31.948 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:31.948 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.948 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:31.948 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.948 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:31.948 18:27:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:34.479 18:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:34.479 18:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:34.479 18:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.479 18:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:34.479 18:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.479 18:27:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:34.479 18:27:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:34.479 [global] 00:10:34.479 thread=1 00:10:34.479 invalidate=1 00:10:34.479 rw=write 00:10:34.479 time_based=1 00:10:34.479 runtime=1 00:10:34.479 ioengine=libaio 00:10:34.479 direct=1 00:10:34.479 bs=4096 00:10:34.479 iodepth=1 00:10:34.479 norandommap=0 00:10:34.479 numjobs=1 00:10:34.479 00:10:34.479 verify_dump=1 00:10:34.479 verify_backlog=512 00:10:34.479 verify_state_save=0 00:10:34.479 do_verify=1 00:10:34.479 verify=crc32c-intel 00:10:34.479 [job0] 00:10:34.479 filename=/dev/nvme0n1 00:10:34.479 Could not set queue depth (nvme0n1) 00:10:34.479 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.479 fio-3.35 00:10:34.479 Starting 1 thread 00:10:35.414 00:10:35.414 job0: (groupid=0, jobs=1): err= 0: pid=77928: Sun Dec 8 18:27:53 2024 00:10:35.414 read: IOPS=2460, BW=9842KiB/s (10.1MB/s)(9852KiB/1001msec) 00:10:35.414 slat (nsec): min=12872, max=49204, avg=14856.04, stdev=2860.77 00:10:35.414 clat (usec): min=131, max=514, avg=204.18, stdev=30.14 00:10:35.414 lat (usec): min=146, max=529, avg=219.04, stdev=30.66 00:10:35.414 clat percentiles (usec): 00:10:35.414 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 180], 00:10:35.414 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 210], 00:10:35.414 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 251], 00:10:35.414 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 420], 99.95th=[ 437], 00:10:35.414 | 99.99th=[ 515] 00:10:35.414 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:35.414 slat (usec): min=17, max=101, avg=23.20, stdev= 6.23 00:10:35.414 clat (usec): min=89, max=2117, avg=153.34, stdev=69.64 00:10:35.414 lat (usec): min=110, max=2150, avg=176.54, stdev=72.45 00:10:35.414 clat percentiles (usec): 00:10:35.414 | 1.00th=[ 95], 5.00th=[ 101], 10.00th=[ 106], 20.00th=[ 116], 00:10:35.414 | 30.00th=[ 123], 40.00th=[ 129], 50.00th=[ 135], 60.00th=[ 143], 00:10:35.414 | 70.00th=[ 151], 80.00th=[ 178], 90.00th=[ 235], 95.00th=[ 265], 00:10:35.414 | 99.00th=[ 338], 99.50th=[ 383], 99.90th=[ 709], 99.95th=[ 963], 00:10:35.414 | 99.99th=[ 2114] 00:10:35.414 bw ( KiB/s): min=11392, max=11392, per=100.00%, avg=11392.00, stdev= 0.00, samples=1 00:10:35.414 iops : min= 2848, max= 2848, avg=2848.00, stdev= 0.00, samples=1 00:10:35.414 lat (usec) : 100=2.05%, 250=91.78%, 500=5.99%, 750=0.14%, 1000=0.02% 00:10:35.414 lat (msec) : 4=0.02% 00:10:35.414 cpu : usr=1.50%, sys=8.00%, ctx=5024, majf=0, minf=5 00:10:35.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.414 issued rwts: total=2463,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.414 00:10:35.414 Run status group 0 (all jobs): 00:10:35.414 READ: bw=9842KiB/s (10.1MB/s), 9842KiB/s-9842KiB/s (10.1MB/s-10.1MB/s), io=9852KiB (10.1MB), run=1001-1001msec 00:10:35.414 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:35.414 00:10:35.414 Disk stats (read/write): 00:10:35.414 nvme0n1: 
ios=2098/2519, merge=0/0, ticks=459/424, in_queue=883, util=91.78% 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.414 rmmod nvme_tcp 00:10:35.414 rmmod nvme_fabrics 00:10:35.414 rmmod nvme_keyring 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 77844 ']' 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 77844 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 77844 ']' 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 77844 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.414 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77844 00:10:35.674 killing process with pid 77844 00:10:35.674 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.674 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.674 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77844' 00:10:35.674 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@969 -- # kill 77844 00:10:35.674 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 77844 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:35.934 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:36.194 00:10:36.194 real 0m5.635s 00:10:36.194 user 0m16.512s 00:10:36.194 sys 0m2.108s 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.194 ************************************ 00:10:36.194 END TEST nvmf_nmic 00:10:36.194 
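The teardown traced above (nvmftestfini plus nvmf_veth_fini) boils down to roughly the following, sketched with the literal names from this run; the per-interface namespace removal is folded into a single ip netns delete for brevity:

  # host side: drop the fabrics session and the nvme-tcp modules
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring
  # target side: stop the nvmf_tgt process started for this test (pid 77844 here)
  kill 77844
  # remove only the firewall rules the test added (tagged with an SPDK_NVMF comment)
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # tear down the veth/bridge topology and the target network namespace
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns delete nvmf_tgt_ns_spdk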
************************************ 00:10:36.194 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.194 ************************************ 00:10:36.194 START TEST nvmf_fio_target 00:10:36.194 ************************************ 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:36.194 * Looking for test storage... 00:10:36.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:36.194 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.455 --rc genhtml_branch_coverage=1 00:10:36.455 --rc genhtml_function_coverage=1 00:10:36.455 --rc genhtml_legend=1 00:10:36.455 --rc geninfo_all_blocks=1 00:10:36.455 --rc geninfo_unexecuted_blocks=1 00:10:36.455 00:10:36.455 ' 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.455 --rc genhtml_branch_coverage=1 00:10:36.455 --rc genhtml_function_coverage=1 00:10:36.455 --rc genhtml_legend=1 00:10:36.455 --rc geninfo_all_blocks=1 00:10:36.455 --rc geninfo_unexecuted_blocks=1 00:10:36.455 00:10:36.455 ' 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.455 --rc genhtml_branch_coverage=1 00:10:36.455 --rc genhtml_function_coverage=1 00:10:36.455 --rc genhtml_legend=1 00:10:36.455 --rc geninfo_all_blocks=1 00:10:36.455 --rc geninfo_unexecuted_blocks=1 00:10:36.455 00:10:36.455 ' 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.455 --rc genhtml_branch_coverage=1 00:10:36.455 --rc genhtml_function_coverage=1 00:10:36.455 --rc genhtml_legend=1 00:10:36.455 --rc geninfo_all_blocks=1 00:10:36.455 --rc geninfo_unexecuted_blocks=1 00:10:36.455 00:10:36.455 ' 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:36.455 
18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.455 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.456 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.456 18:27:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:36.456 Cannot find device "nvmf_init_br" 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:36.456 Cannot find device "nvmf_init_br2" 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:36.456 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:36.457 Cannot find device "nvmf_tgt_br" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.457 Cannot find device "nvmf_tgt_br2" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:36.457 Cannot find device "nvmf_init_br" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:36.457 Cannot find device "nvmf_init_br2" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:36.457 Cannot find device "nvmf_tgt_br" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:36.457 Cannot find device "nvmf_tgt_br2" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:36.457 Cannot find device "nvmf_br" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:36.457 Cannot find device "nvmf_init_if" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:36.457 Cannot find device "nvmf_init_if2" 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:36.457 
18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.457 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.716 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:36.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:36.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:10:36.976 00:10:36.976 --- 10.0.0.3 ping statistics --- 00:10:36.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.976 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:36.976 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:36.976 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:36.976 00:10:36.976 --- 10.0.0.4 ping statistics --- 00:10:36.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.976 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:36.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:10:36.976 00:10:36.976 --- 10.0.0.1 ping statistics --- 00:10:36.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.976 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:36.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
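The virtual topology that nvmf_veth_init has just built and verified with the pings in this section can be reproduced with plain iproute2; a sketch of the first initiator/target pair (the second pair on 10.0.0.2/10.0.0.4 is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the host-side peers so the initiator and target segments can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br  master nvmf_br
  # allow NVMe/TCP (4420) in and forwarding across the bridge, tagged for later cleanup
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.3        # host -> target namespace over the bridge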
00:10:36.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:10:36.976 00:10:36.976 --- 10.0.0.2 ping statistics --- 00:10:36.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.976 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=78163 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 78163 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 78163 ']' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.976 18:27:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.976 [2024-12-08 18:27:54.769448] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:36.976 [2024-12-08 18:27:54.769552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.235 [2024-12-08 18:27:54.909334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.235 [2024-12-08 18:27:55.010404] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.236 [2024-12-08 18:27:55.010505] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.236 [2024-12-08 18:27:55.010528] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.236 [2024-12-08 18:27:55.010536] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.236 [2024-12-08 18:27:55.010544] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.236 [2024-12-08 18:27:55.010661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.236 [2024-12-08 18:27:55.010757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.236 [2024-12-08 18:27:55.010896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.236 [2024-12-08 18:27:55.010901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.236 [2024-12-08 18:27:55.084314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.170 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.170 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:38.171 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:38.171 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.171 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.171 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.171 [2024-12-08 18:27:56.088115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.429 18:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.687 18:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:38.687 18:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.945 18:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:38.945 18:27:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.204 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:39.204 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.772 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:39.772 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:40.030 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.289 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:40.289 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.547 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:40.547 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.806 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:40.806 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:41.065 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:41.324 18:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:41.324 18:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.583 18:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:41.583 18:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:41.842 18:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:42.100 [2024-12-08 18:27:59.944773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:42.100 18:27:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:42.359 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:42.617 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:42.876 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:42.876 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:42.876 18:28:00 
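The target-side provisioning traced above amounts to the following RPC sequence; a condensed sketch (malloc bdev names are auto-assigned Malloc0..Malloc6 in creation order when -b is omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done        # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for ns in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $ns
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # a single connect now exposes four namespaces to the host (nvme0n1..nvme0n4)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c \
      --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c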
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.876 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:42.876 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:42.876 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:44.778 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:44.778 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:44.778 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.778 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:44.778 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.778 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:44.778 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:44.778 [global] 00:10:44.778 thread=1 00:10:44.778 invalidate=1 00:10:44.778 rw=write 00:10:44.778 time_based=1 00:10:44.778 runtime=1 00:10:44.778 ioengine=libaio 00:10:44.778 direct=1 00:10:44.778 bs=4096 00:10:44.778 iodepth=1 00:10:44.778 norandommap=0 00:10:44.778 numjobs=1 00:10:44.778 00:10:44.778 verify_dump=1 00:10:44.778 verify_backlog=512 00:10:44.778 verify_state_save=0 00:10:44.778 do_verify=1 00:10:44.778 verify=crc32c-intel 00:10:44.778 [job0] 00:10:44.778 filename=/dev/nvme0n1 00:10:44.778 [job1] 00:10:44.778 filename=/dev/nvme0n2 00:10:44.778 [job2] 00:10:44.778 filename=/dev/nvme0n3 00:10:44.778 [job3] 00:10:44.778 filename=/dev/nvme0n4 00:10:45.037 Could not set queue depth (nvme0n1) 00:10:45.037 Could not set queue depth (nvme0n2) 00:10:45.037 Could not set queue depth (nvme0n3) 00:10:45.037 Could not set queue depth (nvme0n4) 00:10:45.037 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.037 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.037 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.037 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.037 fio-3.35 00:10:45.037 Starting 4 threads 00:10:46.412 00:10:46.412 job0: (groupid=0, jobs=1): err= 0: pid=78358: Sun Dec 8 18:28:04 2024 00:10:46.412 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:46.412 slat (usec): min=7, max=189, avg=29.15, stdev=16.95 00:10:46.412 clat (usec): min=218, max=7828, avg=472.33, stdev=342.25 00:10:46.412 lat (usec): min=232, max=7845, avg=501.48, stdev=348.95 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 235], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 347], 00:10:46.412 | 30.00th=[ 388], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 453], 00:10:46.412 | 70.00th=[ 486], 80.00th=[ 553], 90.00th=[ 676], 95.00th=[ 775], 00:10:46.412 | 99.00th=[ 898], 99.50th=[ 955], 99.90th=[ 6325], 99.95th=[ 7832], 00:10:46.412 | 99.99th=[ 7832] 
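The fio-wrapper flags above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) simply expand into the job file echoed in the log; an equivalent direct fio run against the four connected namespaces would look roughly like this, job-file form, matching the parameters shown:

  cat > nvmf_write_verify.fio <<'EOF'
  [global]
  ioengine=libaio
  direct=1
  thread=1
  invalidate=1
  rw=write
  bs=4096
  iodepth=1
  numjobs=1
  norandommap=0
  time_based=1
  runtime=1
  do_verify=1
  verify=crc32c-intel
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4
  EOF
  fio nvmf_write_verify.fio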
00:10:46.412 write: IOPS=1337, BW=5351KiB/s (5479kB/s)(5356KiB/1001msec); 0 zone resets 00:10:46.412 slat (usec): min=11, max=117, avg=36.54, stdev=15.18 00:10:46.412 clat (usec): min=120, max=714, avg=320.66, stdev=144.13 00:10:46.412 lat (usec): min=138, max=762, avg=357.20, stdev=154.01 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 163], 20.00th=[ 186], 00:10:46.412 | 30.00th=[ 215], 40.00th=[ 255], 50.00th=[ 293], 60.00th=[ 318], 00:10:46.412 | 70.00th=[ 379], 80.00th=[ 445], 90.00th=[ 570], 95.00th=[ 594], 00:10:46.412 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 717], 00:10:46.412 | 99.99th=[ 717] 00:10:46.412 bw ( KiB/s): min= 4087, max= 4087, per=14.80%, avg=4087.00, stdev= 0.00, samples=1 00:10:46.412 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:46.412 lat (usec) : 250=23.66%, 500=55.48%, 750=18.37%, 1000=2.37% 00:10:46.412 lat (msec) : 4=0.04%, 10=0.08% 00:10:46.412 cpu : usr=1.70%, sys=6.50%, ctx=2373, majf=0, minf=13 00:10:46.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 issued rwts: total=1024,1339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.412 job1: (groupid=0, jobs=1): err= 0: pid=78359: Sun Dec 8 18:28:04 2024 00:10:46.412 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:46.412 slat (nsec): min=10615, max=90780, avg=22345.20, stdev=9945.00 00:10:46.412 clat (usec): min=245, max=1185, avg=468.80, stdev=116.88 00:10:46.412 lat (usec): min=261, max=1202, avg=491.14, stdev=118.96 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 269], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[ 383], 00:10:46.412 | 30.00th=[ 408], 40.00th=[ 424], 50.00th=[ 445], 60.00th=[ 469], 00:10:46.412 | 70.00th=[ 498], 80.00th=[ 553], 90.00th=[ 660], 95.00th=[ 709], 00:10:46.412 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 906], 99.95th=[ 1188], 00:10:46.412 | 99.99th=[ 1188] 00:10:46.412 write: IOPS=1503, BW=6014KiB/s (6158kB/s)(6020KiB/1001msec); 0 zone resets 00:10:46.412 slat (usec): min=12, max=127, avg=26.82, stdev=11.00 00:10:46.412 clat (usec): min=122, max=569, avg=299.28, stdev=73.32 00:10:46.412 lat (usec): min=196, max=588, avg=326.10, stdev=77.73 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 235], 00:10:46.412 | 30.00th=[ 251], 40.00th=[ 269], 50.00th=[ 289], 60.00th=[ 310], 00:10:46.412 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 416], 95.00th=[ 449], 00:10:46.412 | 99.00th=[ 494], 99.50th=[ 515], 99.90th=[ 553], 99.95th=[ 570], 00:10:46.412 | 99.99th=[ 570] 00:10:46.412 bw ( KiB/s): min= 5109, max= 5109, per=18.50%, avg=5109.00, stdev= 0.00, samples=1 00:10:46.412 iops : min= 1277, max= 1277, avg=1277.00, stdev= 0.00, samples=1 00:10:46.412 lat (usec) : 250=17.64%, 500=69.91%, 750=11.66%, 1000=0.75% 00:10:46.412 lat (msec) : 2=0.04% 00:10:46.412 cpu : usr=1.60%, sys=5.30%, ctx=2529, majf=0, minf=8 00:10:46.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 issued rwts: total=1024,1505,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:46.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.412 job2: (groupid=0, jobs=1): err= 0: pid=78360: Sun Dec 8 18:28:04 2024 00:10:46.412 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:46.412 slat (nsec): min=9947, max=79597, avg=19008.54, stdev=7501.16 00:10:46.412 clat (usec): min=249, max=1117, avg=472.27, stdev=117.34 00:10:46.412 lat (usec): min=261, max=1129, avg=491.28, stdev=119.54 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 383], 00:10:46.412 | 30.00th=[ 412], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 474], 00:10:46.412 | 70.00th=[ 506], 80.00th=[ 570], 90.00th=[ 652], 95.00th=[ 693], 00:10:46.412 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 938], 99.95th=[ 1123], 00:10:46.412 | 99.99th=[ 1123] 00:10:46.412 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6024KiB/1001msec); 0 zone resets 00:10:46.412 slat (nsec): min=15697, max=90402, avg=27070.54, stdev=8707.92 00:10:46.412 clat (usec): min=172, max=593, avg=298.96, stdev=77.68 00:10:46.412 lat (usec): min=196, max=615, avg=326.03, stdev=78.66 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 231], 00:10:46.412 | 30.00th=[ 245], 40.00th=[ 265], 50.00th=[ 285], 60.00th=[ 310], 00:10:46.412 | 70.00th=[ 326], 80.00th=[ 355], 90.00th=[ 424], 95.00th=[ 457], 00:10:46.412 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 570], 99.95th=[ 594], 00:10:46.412 | 99.99th=[ 594] 00:10:46.412 bw ( KiB/s): min= 5120, max= 5120, per=18.54%, avg=5120.00, stdev= 0.00, samples=1 00:10:46.412 iops : min= 1280, max= 1280, avg=1280.00, stdev= 0.00, samples=1 00:10:46.412 lat (usec) : 250=19.25%, 500=67.39%, 750=12.69%, 1000=0.63% 00:10:46.412 lat (msec) : 2=0.04% 00:10:46.412 cpu : usr=1.40%, sys=5.20%, ctx=2530, majf=0, minf=7 00:10:46.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 issued rwts: total=1024,1506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.412 job3: (groupid=0, jobs=1): err= 0: pid=78361: Sun Dec 8 18:28:04 2024 00:10:46.412 read: IOPS=2061, BW=8248KiB/s (8446kB/s)(8256KiB/1001msec) 00:10:46.412 slat (nsec): min=7436, max=68913, avg=14391.34, stdev=5893.45 00:10:46.412 clat (usec): min=167, max=786, avg=235.29, stdev=43.63 00:10:46.412 lat (usec): min=180, max=799, avg=249.68, stdev=44.52 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:10:46.412 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:10:46.412 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 310], 00:10:46.412 | 99.00th=[ 363], 99.50th=[ 404], 99.90th=[ 660], 99.95th=[ 717], 00:10:46.412 | 99.99th=[ 791] 00:10:46.412 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:46.412 slat (nsec): min=9480, max=76855, avg=20583.95, stdev=7608.88 00:10:46.412 clat (usec): min=106, max=308, avg=166.11, stdev=33.71 00:10:46.412 lat (usec): min=122, max=378, avg=186.69, stdev=33.92 00:10:46.412 clat percentiles (usec): 00:10:46.412 | 1.00th=[ 118], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 139], 00:10:46.412 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 159], 60.00th=[ 167], 00:10:46.412 | 70.00th=[ 178], 80.00th=[ 192], 90.00th=[ 
215], 95.00th=[ 233], 00:10:46.412 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 302], 99.95th=[ 302], 00:10:46.412 | 99.99th=[ 310] 00:10:46.412 bw ( KiB/s): min=10387, max=10387, per=37.62%, avg=10387.00, stdev= 0.00, samples=1 00:10:46.412 iops : min= 2596, max= 2596, avg=2596.00, stdev= 0.00, samples=1 00:10:46.412 lat (usec) : 250=87.22%, 500=12.67%, 750=0.09%, 1000=0.02% 00:10:46.412 cpu : usr=1.90%, sys=6.30%, ctx=4626, majf=0, minf=12 00:10:46.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.412 issued rwts: total=2064,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.412 00:10:46.412 Run status group 0 (all jobs): 00:10:46.412 READ: bw=20.0MiB/s (21.0MB/s), 4092KiB/s-8248KiB/s (4190kB/s-8446kB/s), io=20.1MiB (21.0MB), run=1001-1001msec 00:10:46.412 WRITE: bw=27.0MiB/s (28.3MB/s), 5351KiB/s-9.99MiB/s (5479kB/s-10.5MB/s), io=27.0MiB (28.3MB), run=1001-1001msec 00:10:46.412 00:10:46.412 Disk stats (read/write): 00:10:46.412 nvme0n1: ios=886/1024, merge=0/0, ticks=427/384, in_queue=811, util=87.27% 00:10:46.412 nvme0n2: ios=1029/1024, merge=0/0, ticks=485/325, in_queue=810, util=89.06% 00:10:46.412 nvme0n3: ios=991/1024, merge=0/0, ticks=449/315, in_queue=764, util=89.14% 00:10:46.412 nvme0n4: ios=1975/2048, merge=0/0, ticks=471/347, in_queue=818, util=89.80% 00:10:46.412 18:28:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:46.412 [global] 00:10:46.412 thread=1 00:10:46.412 invalidate=1 00:10:46.412 rw=randwrite 00:10:46.412 time_based=1 00:10:46.412 runtime=1 00:10:46.412 ioengine=libaio 00:10:46.412 direct=1 00:10:46.412 bs=4096 00:10:46.412 iodepth=1 00:10:46.412 norandommap=0 00:10:46.412 numjobs=1 00:10:46.412 00:10:46.412 verify_dump=1 00:10:46.412 verify_backlog=512 00:10:46.412 verify_state_save=0 00:10:46.412 do_verify=1 00:10:46.412 verify=crc32c-intel 00:10:46.412 [job0] 00:10:46.412 filename=/dev/nvme0n1 00:10:46.412 [job1] 00:10:46.412 filename=/dev/nvme0n2 00:10:46.412 [job2] 00:10:46.412 filename=/dev/nvme0n3 00:10:46.412 [job3] 00:10:46.412 filename=/dev/nvme0n4 00:10:46.412 Could not set queue depth (nvme0n1) 00:10:46.412 Could not set queue depth (nvme0n2) 00:10:46.412 Could not set queue depth (nvme0n3) 00:10:46.413 Could not set queue depth (nvme0n4) 00:10:46.413 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.413 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.413 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.413 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.413 fio-3.35 00:10:46.413 Starting 4 threads 00:10:47.891 00:10:47.891 job0: (groupid=0, jobs=1): err= 0: pid=78414: Sun Dec 8 18:28:05 2024 00:10:47.891 read: IOPS=1993, BW=7972KiB/s (8163kB/s)(7980KiB/1001msec) 00:10:47.891 slat (nsec): min=11868, max=39434, avg=15957.44, stdev=2809.27 00:10:47.891 clat (usec): min=154, max=1771, avg=256.83, stdev=52.01 00:10:47.891 lat (usec): min=169, max=1786, avg=272.79, stdev=52.21 00:10:47.891 clat 
percentiles (usec): 00:10:47.891 | 1.00th=[ 165], 5.00th=[ 188], 10.00th=[ 202], 20.00th=[ 221], 00:10:47.891 | 30.00th=[ 237], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 269], 00:10:47.891 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:10:47.891 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 367], 99.95th=[ 1778], 00:10:47.891 | 99.99th=[ 1778] 00:10:47.891 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:47.891 slat (nsec): min=13517, max=85725, avg=23893.16, stdev=5085.32 00:10:47.891 clat (usec): min=101, max=740, avg=194.91, stdev=40.08 00:10:47.891 lat (usec): min=119, max=762, avg=218.81, stdev=40.62 00:10:47.891 clat percentiles (usec): 00:10:47.891 | 1.00th=[ 120], 5.00th=[ 137], 10.00th=[ 147], 20.00th=[ 163], 00:10:47.891 | 30.00th=[ 174], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 204], 00:10:47.891 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 253], 00:10:47.891 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 412], 99.95th=[ 717], 00:10:47.891 | 99.99th=[ 742] 00:10:47.891 bw ( KiB/s): min= 8407, max= 8407, per=25.68%, avg=8407.00, stdev= 0.00, samples=1 00:10:47.891 iops : min= 2101, max= 2101, avg=2101.00, stdev= 0.00, samples=1 00:10:47.891 lat (usec) : 250=67.99%, 500=31.93%, 750=0.05% 00:10:47.891 lat (msec) : 2=0.02% 00:10:47.891 cpu : usr=1.50%, sys=6.70%, ctx=4043, majf=0, minf=11 00:10:47.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 issued rwts: total=1995,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.891 job1: (groupid=0, jobs=1): err= 0: pid=78415: Sun Dec 8 18:28:05 2024 00:10:47.891 read: IOPS=2039, BW=8160KiB/s (8356kB/s)(8168KiB/1001msec) 00:10:47.891 slat (nsec): min=11644, max=36898, avg=15286.07, stdev=2459.88 00:10:47.891 clat (usec): min=140, max=484, avg=254.42, stdev=41.67 00:10:47.891 lat (usec): min=153, max=499, avg=269.71, stdev=41.78 00:10:47.891 clat percentiles (usec): 00:10:47.891 | 1.00th=[ 161], 5.00th=[ 182], 10.00th=[ 196], 20.00th=[ 217], 00:10:47.891 | 30.00th=[ 235], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 269], 00:10:47.891 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:10:47.891 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 379], 99.95th=[ 433], 00:10:47.891 | 99.99th=[ 486] 00:10:47.891 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:47.891 slat (nsec): min=14813, max=88453, avg=22861.38, stdev=4781.17 00:10:47.891 clat (usec): min=96, max=464, avg=192.87, stdev=37.96 00:10:47.891 lat (usec): min=114, max=485, avg=215.73, stdev=38.73 00:10:47.891 clat percentiles (usec): 00:10:47.891 | 1.00th=[ 115], 5.00th=[ 129], 10.00th=[ 143], 20.00th=[ 159], 00:10:47.891 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 202], 00:10:47.891 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 253], 00:10:47.891 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 343], 99.95th=[ 347], 00:10:47.891 | 99.99th=[ 465] 00:10:47.891 bw ( KiB/s): min= 8654, max= 8654, per=26.44%, avg=8654.00, stdev= 0.00, samples=1 00:10:47.891 iops : min= 2163, max= 2163, avg=2163.00, stdev= 0.00, samples=1 00:10:47.891 lat (usec) : 100=0.02%, 250=68.58%, 500=31.39% 00:10:47.891 cpu : usr=1.60%, sys=6.30%, ctx=4090, majf=0, minf=9 00:10:47.891 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 issued rwts: total=2042,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.891 job2: (groupid=0, jobs=1): err= 0: pid=78416: Sun Dec 8 18:28:05 2024 00:10:47.891 read: IOPS=1878, BW=7512KiB/s (7693kB/s)(7520KiB/1001msec) 00:10:47.891 slat (nsec): min=12223, max=41591, avg=15225.48, stdev=2498.63 00:10:47.891 clat (usec): min=158, max=498, avg=266.14, stdev=42.25 00:10:47.891 lat (usec): min=170, max=511, avg=281.37, stdev=42.51 00:10:47.891 clat percentiles (usec): 00:10:47.891 | 1.00th=[ 180], 5.00th=[ 200], 10.00th=[ 212], 20.00th=[ 229], 00:10:47.891 | 30.00th=[ 241], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 277], 00:10:47.891 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 338], 00:10:47.891 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 416], 99.95th=[ 498], 00:10:47.891 | 99.99th=[ 498] 00:10:47.891 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:47.891 slat (nsec): min=15015, max=86280, avg=24187.78, stdev=5302.01 00:10:47.891 clat (usec): min=116, max=2116, avg=202.35, stdev=55.32 00:10:47.891 lat (usec): min=140, max=2138, avg=226.54, stdev=56.08 00:10:47.891 clat percentiles (usec): 00:10:47.891 | 1.00th=[ 139], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:10:47.891 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 204], 00:10:47.891 | 70.00th=[ 217], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 265], 00:10:47.891 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 404], 99.95th=[ 465], 00:10:47.891 | 99.99th=[ 2114] 00:10:47.891 bw ( KiB/s): min= 8175, max= 8175, per=24.97%, avg=8175.00, stdev= 0.00, samples=1 00:10:47.891 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:47.891 lat (usec) : 250=65.12%, 500=34.85% 00:10:47.891 lat (msec) : 4=0.03% 00:10:47.891 cpu : usr=1.40%, sys=6.30%, ctx=3929, majf=0, minf=10 00:10:47.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 issued rwts: total=1880,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.891 job3: (groupid=0, jobs=1): err= 0: pid=78417: Sun Dec 8 18:28:05 2024 00:10:47.891 read: IOPS=1806, BW=7225KiB/s (7398kB/s)(7232KiB/1001msec) 00:10:47.891 slat (nsec): min=12225, max=43427, avg=15969.88, stdev=3281.93 00:10:47.891 clat (usec): min=161, max=2609, avg=272.49, stdev=104.39 00:10:47.891 lat (usec): min=177, max=2634, avg=288.46, stdev=104.81 00:10:47.891 clat percentiles (usec): 00:10:47.891 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 229], 00:10:47.891 | 30.00th=[ 241], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 277], 00:10:47.891 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 338], 00:10:47.891 | 99.00th=[ 379], 99.50th=[ 652], 99.90th=[ 1926], 99.95th=[ 2606], 00:10:47.891 | 99.99th=[ 2606] 00:10:47.891 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:47.891 slat (nsec): min=15353, max=82707, avg=23982.95, stdev=4999.54 00:10:47.891 clat (usec): min=107, max=2185, avg=205.97, stdev=68.00 00:10:47.891 lat (usec): min=127, 
max=2206, avg=229.96, stdev=68.87 00:10:47.891 clat percentiles (usec): 00:10:47.891 | 1.00th=[ 127], 5.00th=[ 145], 10.00th=[ 155], 20.00th=[ 172], 00:10:47.891 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 200], 60.00th=[ 210], 00:10:47.891 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 258], 95.00th=[ 273], 00:10:47.891 | 99.00th=[ 351], 99.50th=[ 379], 99.90th=[ 848], 99.95th=[ 1303], 00:10:47.891 | 99.99th=[ 2180] 00:10:47.891 bw ( KiB/s): min= 8175, max= 8175, per=24.97%, avg=8175.00, stdev= 0.00, samples=1 00:10:47.891 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:47.891 lat (usec) : 250=63.74%, 500=35.87%, 750=0.08%, 1000=0.08% 00:10:47.891 lat (msec) : 2=0.18%, 4=0.05% 00:10:47.891 cpu : usr=1.40%, sys=6.50%, ctx=3862, majf=0, minf=15 00:10:47.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.891 issued rwts: total=1808,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.891 00:10:47.891 Run status group 0 (all jobs): 00:10:47.892 READ: bw=30.1MiB/s (31.6MB/s), 7225KiB/s-8160KiB/s (7398kB/s-8356kB/s), io=30.2MiB (31.6MB), run=1001-1001msec 00:10:47.892 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:47.892 00:10:47.892 Disk stats (read/write): 00:10:47.892 nvme0n1: ios=1586/1998, merge=0/0, ticks=429/403, in_queue=832, util=88.28% 00:10:47.892 nvme0n2: ios=1578/2048, merge=0/0, ticks=425/410, in_queue=835, util=88.66% 00:10:47.892 nvme0n3: ios=1536/1888, merge=0/0, ticks=414/397, in_queue=811, util=89.43% 00:10:47.892 nvme0n4: ios=1536/1785, merge=0/0, ticks=410/385, in_queue=795, util=89.39% 00:10:47.892 18:28:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:47.892 [global] 00:10:47.892 thread=1 00:10:47.892 invalidate=1 00:10:47.892 rw=write 00:10:47.892 time_based=1 00:10:47.892 runtime=1 00:10:47.892 ioengine=libaio 00:10:47.892 direct=1 00:10:47.892 bs=4096 00:10:47.892 iodepth=128 00:10:47.892 norandommap=0 00:10:47.892 numjobs=1 00:10:47.892 00:10:47.892 verify_dump=1 00:10:47.892 verify_backlog=512 00:10:47.892 verify_state_save=0 00:10:47.892 do_verify=1 00:10:47.892 verify=crc32c-intel 00:10:47.892 [job0] 00:10:47.892 filename=/dev/nvme0n1 00:10:47.892 [job1] 00:10:47.892 filename=/dev/nvme0n2 00:10:47.892 [job2] 00:10:47.892 filename=/dev/nvme0n3 00:10:47.892 [job3] 00:10:47.892 filename=/dev/nvme0n4 00:10:47.892 Could not set queue depth (nvme0n1) 00:10:47.892 Could not set queue depth (nvme0n2) 00:10:47.892 Could not set queue depth (nvme0n3) 00:10:47.892 Could not set queue depth (nvme0n4) 00:10:47.892 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.892 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.892 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.892 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.892 fio-3.35 00:10:47.892 Starting 4 threads 00:10:49.268 00:10:49.268 job0: (groupid=0, jobs=1): err= 0: pid=78472: Sun Dec 8 
18:28:06 2024 00:10:49.268 read: IOPS=3376, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1005msec) 00:10:49.268 slat (usec): min=6, max=4686, avg=143.91, stdev=714.65 00:10:49.268 clat (usec): min=614, max=21183, avg=18663.61, stdev=1907.63 00:10:49.268 lat (usec): min=5229, max=21198, avg=18807.52, stdev=1770.08 00:10:49.268 clat percentiles (usec): 00:10:49.268 | 1.00th=[10290], 5.00th=[15664], 10.00th=[17695], 20.00th=[18220], 00:10:49.268 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:10:49.268 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:10:49.268 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:10:49.268 | 99.99th=[21103] 00:10:49.268 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:10:49.268 slat (usec): min=11, max=4582, avg=135.70, stdev=629.23 00:10:49.268 clat (usec): min=12831, max=20203, avg=17717.93, stdev=1029.60 00:10:49.268 lat (usec): min=12915, max=20227, avg=17853.63, stdev=818.96 00:10:49.268 clat percentiles (usec): 00:10:49.268 | 1.00th=[13698], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:10:49.268 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:10:49.268 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19006], 95.00th=[19268], 00:10:49.268 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20317], 00:10:49.268 | 99.99th=[20317] 00:10:49.268 bw ( KiB/s): min=13736, max=14906, per=31.21%, avg=14321.00, stdev=827.31, samples=2 00:10:49.269 iops : min= 3434, max= 3726, avg=3580.00, stdev=206.48, samples=2 00:10:49.269 lat (usec) : 750=0.01% 00:10:49.269 lat (msec) : 10=0.46%, 20=93.08%, 50=6.45% 00:10:49.269 cpu : usr=3.09%, sys=10.46%, ctx=220, majf=0, minf=1 00:10:49.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:49.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.269 issued rwts: total=3393,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.269 job1: (groupid=0, jobs=1): err= 0: pid=78473: Sun Dec 8 18:28:06 2024 00:10:49.269 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:10:49.269 slat (usec): min=6, max=10202, avg=209.11, stdev=939.82 00:10:49.269 clat (usec): min=16805, max=50730, avg=27271.71, stdev=5626.61 00:10:49.269 lat (usec): min=16824, max=51546, avg=27480.82, stdev=5690.73 00:10:49.269 clat percentiles (usec): 00:10:49.269 | 1.00th=[16909], 5.00th=[20055], 10.00th=[21627], 20.00th=[22676], 00:10:49.269 | 30.00th=[23200], 40.00th=[24773], 50.00th=[26870], 60.00th=[28181], 00:10:49.269 | 70.00th=[29230], 80.00th=[31065], 90.00th=[35390], 95.00th=[38011], 00:10:49.269 | 99.00th=[45351], 99.50th=[46400], 99.90th=[50594], 99.95th=[50594], 00:10:49.269 | 99.99th=[50594] 00:10:49.269 write: IOPS=2486, BW=9944KiB/s (10.2MB/s)(9.79MiB/1008msec); 0 zone resets 00:10:49.269 slat (usec): min=14, max=10956, avg=220.98, stdev=945.65 00:10:49.269 clat (usec): min=5274, max=56800, avg=28171.94, stdev=10238.65 00:10:49.269 lat (usec): min=7421, max=56830, avg=28392.92, stdev=10325.35 00:10:49.269 clat percentiles (usec): 00:10:49.269 | 1.00th=[12780], 5.00th=[15926], 10.00th=[17171], 20.00th=[18744], 00:10:49.269 | 30.00th=[19268], 40.00th=[21890], 50.00th=[30540], 60.00th=[31327], 00:10:49.269 | 70.00th=[32375], 80.00th=[35390], 90.00th=[40633], 95.00th=[50594], 00:10:49.269 | 99.00th=[54789], 
99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:10:49.269 | 99.99th=[56886] 00:10:49.269 bw ( KiB/s): min= 7432, max=11592, per=20.73%, avg=9512.00, stdev=2941.56, samples=2 00:10:49.269 iops : min= 1858, max= 2898, avg=2378.00, stdev=735.39, samples=2 00:10:49.269 lat (msec) : 10=0.37%, 20=21.98%, 50=74.64%, 100=3.01% 00:10:49.269 cpu : usr=2.58%, sys=7.15%, ctx=262, majf=0, minf=3 00:10:49.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:49.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.269 issued rwts: total=2048,2506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.269 job2: (groupid=0, jobs=1): err= 0: pid=78474: Sun Dec 8 18:28:06 2024 00:10:49.269 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:10:49.269 slat (usec): min=6, max=10026, avg=239.55, stdev=1001.01 00:10:49.269 clat (usec): min=19416, max=64500, avg=28781.08, stdev=5890.45 00:10:49.269 lat (usec): min=19453, max=64515, avg=29020.64, stdev=5973.53 00:10:49.269 clat percentiles (usec): 00:10:49.269 | 1.00th=[19792], 5.00th=[21890], 10.00th=[23200], 20.00th=[23987], 00:10:49.269 | 30.00th=[25297], 40.00th=[26346], 50.00th=[28181], 60.00th=[28705], 00:10:49.269 | 70.00th=[30540], 80.00th=[31851], 90.00th=[35390], 95.00th=[40109], 00:10:49.269 | 99.00th=[50070], 99.50th=[58459], 99.90th=[64750], 99.95th=[64750], 00:10:49.269 | 99.99th=[64750] 00:10:49.269 write: IOPS=1874, BW=7496KiB/s (7676kB/s)(7556KiB/1008msec); 0 zone resets 00:10:49.269 slat (usec): min=19, max=10945, avg=326.10, stdev=1169.72 00:10:49.269 clat (usec): min=7443, max=89629, avg=43541.00, stdev=18855.66 00:10:49.269 lat (usec): min=7467, max=89657, avg=43867.10, stdev=18979.66 00:10:49.269 clat percentiles (usec): 00:10:49.269 | 1.00th=[10814], 5.00th=[25035], 10.00th=[29230], 20.00th=[30540], 00:10:49.269 | 30.00th=[31327], 40.00th=[32113], 50.00th=[34341], 60.00th=[35914], 00:10:49.269 | 70.00th=[51119], 80.00th=[61604], 90.00th=[77071], 95.00th=[83362], 00:10:49.269 | 99.00th=[87557], 99.50th=[87557], 99.90th=[89654], 99.95th=[89654], 00:10:49.269 | 99.99th=[89654] 00:10:49.269 bw ( KiB/s): min= 6672, max= 7424, per=15.36%, avg=7048.00, stdev=531.74, samples=2 00:10:49.269 iops : min= 1668, max= 1856, avg=1762.00, stdev=132.94, samples=2 00:10:49.269 lat (msec) : 10=0.47%, 20=1.72%, 50=80.06%, 100=17.75% 00:10:49.269 cpu : usr=1.79%, sys=6.16%, ctx=250, majf=0, minf=8 00:10:49.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:10:49.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.269 issued rwts: total=1536,1889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.269 job3: (groupid=0, jobs=1): err= 0: pid=78475: Sun Dec 8 18:28:06 2024 00:10:49.269 read: IOPS=3471, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1005msec) 00:10:49.269 slat (usec): min=6, max=4685, avg=140.62, stdev=692.33 00:10:49.269 clat (usec): min=611, max=21046, avg=18265.76, stdev=1985.40 00:10:49.269 lat (usec): min=5143, max=21060, avg=18406.37, stdev=1862.54 00:10:49.269 clat percentiles (usec): 00:10:49.269 | 1.00th=[10028], 5.00th=[15533], 10.00th=[16450], 20.00th=[17433], 00:10:49.269 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18744], 
60.00th=[19006], 00:10:49.269 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20055], 95.00th=[20055], 00:10:49.269 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21103], 99.95th=[21103], 00:10:49.269 | 99.99th=[21103] 00:10:49.269 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:10:49.269 slat (usec): min=13, max=4841, avg=134.72, stdev=619.69 00:10:49.269 clat (usec): min=12331, max=19614, avg=17555.11, stdev=1128.28 00:10:49.269 lat (usec): min=13853, max=19639, avg=17689.82, stdev=950.79 00:10:49.269 clat percentiles (usec): 00:10:49.269 | 1.00th=[13698], 5.00th=[15664], 10.00th=[16188], 20.00th=[16712], 00:10:49.269 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:10:49.269 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19006], 95.00th=[19006], 00:10:49.269 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:10:49.269 | 99.99th=[19530] 00:10:49.269 bw ( KiB/s): min=13304, max=15368, per=31.24%, avg=14336.00, stdev=1459.47, samples=2 00:10:49.269 iops : min= 3326, max= 3842, avg=3584.00, stdev=364.87, samples=2 00:10:49.269 lat (usec) : 750=0.01% 00:10:49.269 lat (msec) : 10=0.47%, 20=95.59%, 50=3.93% 00:10:49.269 cpu : usr=3.88%, sys=10.26%, ctx=223, majf=0, minf=1 00:10:49.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:49.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.269 issued rwts: total=3489,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.269 00:10:49.269 Run status group 0 (all jobs): 00:10:49.269 READ: bw=40.6MiB/s (42.5MB/s), 6095KiB/s-13.6MiB/s (6242kB/s-14.2MB/s), io=40.9MiB (42.9MB), run=1005-1008msec 00:10:49.269 WRITE: bw=44.8MiB/s (47.0MB/s), 7496KiB/s-13.9MiB/s (7676kB/s-14.6MB/s), io=45.2MiB (47.4MB), run=1005-1008msec 00:10:49.269 00:10:49.269 Disk stats (read/write): 00:10:49.269 nvme0n1: ios=2962/3072, merge=0/0, ticks=12728/12145, in_queue=24873, util=87.46% 00:10:49.269 nvme0n2: ios=2014/2048, merge=0/0, ticks=17153/16793, in_queue=33946, util=88.74% 00:10:49.269 nvme0n3: ios=1192/1536, merge=0/0, ticks=11856/22955, in_queue=34811, util=89.10% 00:10:49.269 nvme0n4: ios=3008/3072, merge=0/0, ticks=12710/11903, in_queue=24613, util=89.66% 00:10:49.269 18:28:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:49.269 [global] 00:10:49.269 thread=1 00:10:49.269 invalidate=1 00:10:49.269 rw=randwrite 00:10:49.269 time_based=1 00:10:49.269 runtime=1 00:10:49.269 ioengine=libaio 00:10:49.269 direct=1 00:10:49.269 bs=4096 00:10:49.269 iodepth=128 00:10:49.269 norandommap=0 00:10:49.269 numjobs=1 00:10:49.269 00:10:49.269 verify_dump=1 00:10:49.269 verify_backlog=512 00:10:49.269 verify_state_save=0 00:10:49.269 do_verify=1 00:10:49.269 verify=crc32c-intel 00:10:49.269 [job0] 00:10:49.269 filename=/dev/nvme0n1 00:10:49.269 [job1] 00:10:49.269 filename=/dev/nvme0n2 00:10:49.269 [job2] 00:10:49.269 filename=/dev/nvme0n3 00:10:49.269 [job3] 00:10:49.269 filename=/dev/nvme0n4 00:10:49.269 Could not set queue depth (nvme0n1) 00:10:49.269 Could not set queue depth (nvme0n2) 00:10:49.269 Could not set queue depth (nvme0n3) 00:10:49.269 Could not set queue depth (nvme0n4) 00:10:49.269 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:49.269 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.269 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.269 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.269 fio-3.35 00:10:49.269 Starting 4 threads 00:10:50.646 00:10:50.646 job0: (groupid=0, jobs=1): err= 0: pid=78534: Sun Dec 8 18:28:08 2024 00:10:50.646 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:10:50.646 slat (usec): min=8, max=6676, avg=118.74, stdev=537.01 00:10:50.646 clat (usec): min=9790, max=21446, avg=15240.32, stdev=1542.94 00:10:50.646 lat (usec): min=9842, max=22038, avg=15359.06, stdev=1554.63 00:10:50.646 clat percentiles (usec): 00:10:50.646 | 1.00th=[11076], 5.00th=[12780], 10.00th=[13435], 20.00th=[14353], 00:10:50.646 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:10:50.646 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17957], 00:10:50.646 | 99.00th=[19792], 99.50th=[20579], 99.90th=[20841], 99.95th=[21103], 00:10:50.646 | 99.99th=[21365] 00:10:50.646 write: IOPS=4328, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1007msec); 0 zone resets 00:10:50.646 slat (usec): min=11, max=7665, avg=109.90, stdev=630.18 00:10:50.646 clat (usec): min=6162, max=24165, avg=14873.32, stdev=1871.39 00:10:50.646 lat (usec): min=6924, max=24182, avg=14983.23, stdev=1957.96 00:10:50.646 clat percentiles (usec): 00:10:50.646 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12911], 20.00th=[13829], 00:10:50.646 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:10:50.646 | 70.00th=[15401], 80.00th=[15926], 90.00th=[16909], 95.00th=[18220], 00:10:50.646 | 99.00th=[21103], 99.50th=[21890], 99.90th=[23987], 99.95th=[24249], 00:10:50.646 | 99.99th=[24249] 00:10:50.646 bw ( KiB/s): min=16704, max=17152, per=36.74%, avg=16928.00, stdev=316.78, samples=2 00:10:50.646 iops : min= 4176, max= 4288, avg=4232.00, stdev=79.20, samples=2 00:10:50.646 lat (msec) : 10=0.72%, 20=97.82%, 50=1.45% 00:10:50.647 cpu : usr=3.48%, sys=12.52%, ctx=328, majf=0, minf=9 00:10:50.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:50.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.647 issued rwts: total=4096,4359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.647 job1: (groupid=0, jobs=1): err= 0: pid=78536: Sun Dec 8 18:28:08 2024 00:10:50.647 read: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec) 00:10:50.647 slat (usec): min=6, max=20241, avg=336.12, stdev=1299.30 00:10:50.647 clat (usec): min=24352, max=61682, avg=42294.34, stdev=6816.83 00:10:50.647 lat (usec): min=24858, max=61704, avg=42630.46, stdev=6859.25 00:10:50.647 clat percentiles (usec): 00:10:50.647 | 1.00th=[28705], 5.00th=[32375], 10.00th=[34341], 20.00th=[37487], 00:10:50.647 | 30.00th=[39584], 40.00th=[40109], 50.00th=[40633], 60.00th=[41681], 00:10:50.647 | 70.00th=[43254], 80.00th=[49546], 90.00th=[52691], 95.00th=[54789], 00:10:50.647 | 99.00th=[57410], 99.50th=[58459], 99.90th=[59507], 99.95th=[61604], 00:10:50.647 | 99.99th=[61604] 00:10:50.647 write: IOPS=1787, BW=7151KiB/s (7323kB/s)(7244KiB/1013msec); 0 zone resets 00:10:50.647 slat (usec): min=5, max=15192, 
avg=259.74, stdev=1100.20 00:10:50.647 clat (usec): min=10434, max=48579, avg=34992.24, stdev=8474.08 00:10:50.647 lat (usec): min=10865, max=50554, avg=35251.98, stdev=8459.52 00:10:50.647 clat percentiles (usec): 00:10:50.647 | 1.00th=[12387], 5.00th=[18220], 10.00th=[22676], 20.00th=[27919], 00:10:50.647 | 30.00th=[30278], 40.00th=[33817], 50.00th=[38536], 60.00th=[39584], 00:10:50.647 | 70.00th=[41157], 80.00th=[42206], 90.00th=[43779], 95.00th=[45351], 00:10:50.647 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47973], 99.95th=[48497], 00:10:50.647 | 99.99th=[48497] 00:10:50.647 bw ( KiB/s): min= 5272, max= 8192, per=14.61%, avg=6732.00, stdev=2064.75, samples=2 00:10:50.647 iops : min= 1318, max= 2048, avg=1683.00, stdev=516.19, samples=2 00:10:50.647 lat (msec) : 20=3.76%, 50=87.81%, 100=8.43% 00:10:50.647 cpu : usr=2.17%, sys=5.04%, ctx=523, majf=0, minf=7 00:10:50.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:10:50.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.647 issued rwts: total=1536,1811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.647 job2: (groupid=0, jobs=1): err= 0: pid=78540: Sun Dec 8 18:28:08 2024 00:10:50.647 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec) 00:10:50.647 slat (usec): min=8, max=14353, avg=335.46, stdev=1340.42 00:10:50.647 clat (usec): min=21164, max=57854, avg=40211.65, stdev=6020.10 00:10:50.647 lat (usec): min=21176, max=57873, avg=40547.12, stdev=6040.45 00:10:50.647 clat percentiles (usec): 00:10:50.647 | 1.00th=[26608], 5.00th=[28967], 10.00th=[33817], 20.00th=[35914], 00:10:50.647 | 30.00th=[38536], 40.00th=[39060], 50.00th=[40109], 60.00th=[40633], 00:10:50.647 | 70.00th=[41681], 80.00th=[43779], 90.00th=[49021], 95.00th=[53216], 00:10:50.647 | 99.00th=[55837], 99.50th=[57410], 99.90th=[57410], 99.95th=[57934], 00:10:50.647 | 99.99th=[57934] 00:10:50.647 write: IOPS=1580, BW=6323KiB/s (6475kB/s)(6380KiB/1009msec); 0 zone resets 00:10:50.647 slat (usec): min=5, max=14428, avg=300.18, stdev=1156.17 00:10:50.647 clat (usec): min=534, max=57288, avg=39586.83, stdev=7527.48 00:10:50.647 lat (usec): min=13739, max=57778, avg=39887.00, stdev=7593.37 00:10:50.647 clat percentiles (usec): 00:10:50.647 | 1.00th=[18482], 5.00th=[22938], 10.00th=[28443], 20.00th=[37487], 00:10:50.647 | 30.00th=[39060], 40.00th=[39584], 50.00th=[40109], 60.00th=[41157], 00:10:50.647 | 70.00th=[42206], 80.00th=[42206], 90.00th=[49021], 95.00th=[52167], 00:10:50.647 | 99.00th=[55313], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:10:50.647 | 99.99th=[57410] 00:10:50.647 bw ( KiB/s): min= 4736, max= 7552, per=13.34%, avg=6144.00, stdev=1991.21, samples=2 00:10:50.647 iops : min= 1184, max= 1888, avg=1536.00, stdev=497.80, samples=2 00:10:50.647 lat (usec) : 750=0.03% 00:10:50.647 lat (msec) : 20=1.72%, 50=89.72%, 100=8.53% 00:10:50.647 cpu : usr=1.98%, sys=4.56%, ctx=525, majf=0, minf=15 00:10:50.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:10:50.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.647 issued rwts: total=1536,1595,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.647 job3: (groupid=0, jobs=1): err= 0: 
pid=78541: Sun Dec 8 18:28:08 2024 00:10:50.647 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:10:50.647 slat (usec): min=7, max=9885, avg=134.96, stdev=657.82 00:10:50.647 clat (usec): min=10069, max=25979, avg=17142.99, stdev=2006.02 00:10:50.647 lat (usec): min=10428, max=26044, avg=17277.95, stdev=2028.18 00:10:50.647 clat percentiles (usec): 00:10:50.647 | 1.00th=[11207], 5.00th=[13566], 10.00th=[15008], 20.00th=[16057], 00:10:50.647 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:10:50.647 | 70.00th=[17957], 80.00th=[18220], 90.00th=[19006], 95.00th=[20579], 00:10:50.647 | 99.00th=[22938], 99.50th=[23725], 99.90th=[25297], 99.95th=[25560], 00:10:50.647 | 99.99th=[26084] 00:10:50.647 write: IOPS=3872, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1008msec); 0 zone resets 00:10:50.647 slat (usec): min=12, max=7517, avg=123.90, stdev=660.83 00:10:50.647 clat (usec): min=7079, max=25346, avg=16883.96, stdev=2205.13 00:10:50.647 lat (usec): min=7108, max=25366, avg=17007.86, stdev=2286.71 00:10:50.647 clat percentiles (usec): 00:10:50.647 | 1.00th=[10159], 5.00th=[12911], 10.00th=[15008], 20.00th=[15664], 00:10:50.647 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:10:50.647 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[20317], 00:10:50.647 | 99.00th=[23987], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:10:50.647 | 99.99th=[25297] 00:10:50.647 bw ( KiB/s): min=13824, max=16384, per=32.78%, avg=15104.00, stdev=1810.19, samples=2 00:10:50.647 iops : min= 3456, max= 4096, avg=3776.00, stdev=452.55, samples=2 00:10:50.647 lat (msec) : 10=0.49%, 20=92.83%, 50=6.68% 00:10:50.647 cpu : usr=3.48%, sys=11.42%, ctx=352, majf=0, minf=17 00:10:50.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:50.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.647 issued rwts: total=3584,3903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.647 00:10:50.647 Run status group 0 (all jobs): 00:10:50.647 READ: bw=41.5MiB/s (43.5MB/s), 6065KiB/s-15.9MiB/s (6211kB/s-16.7MB/s), io=42.0MiB (44.0MB), run=1007-1013msec 00:10:50.647 WRITE: bw=45.0MiB/s (47.2MB/s), 6323KiB/s-16.9MiB/s (6475kB/s-17.7MB/s), io=45.6MiB (47.8MB), run=1007-1013msec 00:10:50.647 00:10:50.647 Disk stats (read/write): 00:10:50.647 nvme0n1: ios=3634/3686, merge=0/0, ticks=26406/23390, in_queue=49796, util=87.68% 00:10:50.647 nvme0n2: ios=1269/1536, merge=0/0, ticks=26176/26764, in_queue=52940, util=88.78% 00:10:50.647 nvme0n3: ios=1050/1536, merge=0/0, ticks=21940/28774, in_queue=50714, util=86.63% 00:10:50.647 nvme0n4: ios=3078/3311, merge=0/0, ticks=26059/23953, in_queue=50012, util=89.78% 00:10:50.647 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:50.647 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=78555 00:10:50.647 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:50.647 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:50.647 [global] 00:10:50.647 thread=1 00:10:50.647 invalidate=1 00:10:50.647 rw=read 00:10:50.647 time_based=1 00:10:50.647 runtime=10 00:10:50.647 ioengine=libaio 00:10:50.647 direct=1 00:10:50.647 bs=4096 
00:10:50.647 iodepth=1 00:10:50.647 norandommap=1 00:10:50.647 numjobs=1 00:10:50.647 00:10:50.647 [job0] 00:10:50.647 filename=/dev/nvme0n1 00:10:50.647 [job1] 00:10:50.647 filename=/dev/nvme0n2 00:10:50.647 [job2] 00:10:50.647 filename=/dev/nvme0n3 00:10:50.647 [job3] 00:10:50.647 filename=/dev/nvme0n4 00:10:50.647 Could not set queue depth (nvme0n1) 00:10:50.647 Could not set queue depth (nvme0n2) 00:10:50.647 Could not set queue depth (nvme0n3) 00:10:50.647 Could not set queue depth (nvme0n4) 00:10:50.647 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.647 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.647 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.647 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.647 fio-3.35 00:10:50.647 Starting 4 threads 00:10:53.934 18:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:53.934 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46829568, buflen=4096 00:10:53.934 fio: pid=78598, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:53.934 18:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:53.934 fio: pid=78597, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:53.934 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51138560, buflen=4096 00:10:53.934 18:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.934 18:28:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:54.193 fio: pid=78595, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:54.193 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=56549376, buflen=4096 00:10:54.193 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.193 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:54.453 fio: pid=78596, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:54.454 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=62963712, buflen=4096 00:10:54.454 00:10:54.454 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78595: Sun Dec 8 18:28:12 2024 00:10:54.454 read: IOPS=3993, BW=15.6MiB/s (16.4MB/s)(53.9MiB/3457msec) 00:10:54.454 slat (usec): min=11, max=11731, avg=16.75, stdev=152.99 00:10:54.454 clat (usec): min=66, max=5167, avg=232.41, stdev=59.06 00:10:54.454 lat (usec): min=150, max=11954, avg=249.16, stdev=164.05 00:10:54.454 clat percentiles (usec): 00:10:54.454 | 1.00th=[ 155], 5.00th=[ 176], 10.00th=[ 188], 20.00th=[ 204], 00:10:54.454 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:10:54.454 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:10:54.454 | 
99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 545], 99.95th=[ 660], 00:10:54.454 | 99.99th=[ 1958] 00:10:54.454 bw ( KiB/s): min=15192, max=16152, per=27.86%, avg=15793.33, stdev=363.92, samples=6 00:10:54.454 iops : min= 3798, max= 4038, avg=3948.33, stdev=90.98, samples=6 00:10:54.454 lat (usec) : 100=0.01%, 250=71.19%, 500=28.67%, 750=0.09% 00:10:54.454 lat (msec) : 2=0.03%, 10=0.01% 00:10:54.454 cpu : usr=1.04%, sys=4.72%, ctx=13829, majf=0, minf=1 00:10:54.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 issued rwts: total=13807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.454 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78596: Sun Dec 8 18:28:12 2024 00:10:54.454 read: IOPS=4102, BW=16.0MiB/s (16.8MB/s)(60.0MiB/3747msec) 00:10:54.454 slat (usec): min=12, max=11005, avg=17.45, stdev=175.60 00:10:54.454 clat (usec): min=50, max=2063, avg=225.14, stdev=42.70 00:10:54.454 lat (usec): min=147, max=11204, avg=242.60, stdev=180.60 00:10:54.454 clat percentiles (usec): 00:10:54.454 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 192], 00:10:54.454 | 30.00th=[ 206], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 239], 00:10:54.454 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 277], 00:10:54.454 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 359], 99.95th=[ 510], 00:10:54.454 | 99.99th=[ 1876] 00:10:54.454 bw ( KiB/s): min=15408, max=18212, per=28.61%, avg=16214.29, stdev=931.57, samples=7 00:10:54.454 iops : min= 3852, max= 4553, avg=4053.57, stdev=232.89, samples=7 00:10:54.454 lat (usec) : 100=0.01%, 250=73.63%, 500=26.30%, 750=0.03% 00:10:54.454 lat (msec) : 2=0.02%, 4=0.01% 00:10:54.454 cpu : usr=0.85%, sys=5.05%, ctx=15384, majf=0, minf=2 00:10:54.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 issued rwts: total=15373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.454 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78597: Sun Dec 8 18:28:12 2024 00:10:54.454 read: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(48.8MiB/3207msec) 00:10:54.454 slat (usec): min=10, max=9760, avg=14.12, stdev=119.03 00:10:54.454 clat (usec): min=168, max=1966, avg=241.48, stdev=34.97 00:10:54.454 lat (usec): min=185, max=10067, avg=255.60, stdev=124.70 00:10:54.454 clat percentiles (usec): 00:10:54.454 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:10:54.454 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:10:54.454 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 285], 00:10:54.454 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 383], 99.95th=[ 619], 00:10:54.454 | 99.99th=[ 1745] 00:10:54.454 bw ( KiB/s): min=15192, max=16032, per=27.70%, avg=15702.67, stdev=325.29, samples=6 00:10:54.454 iops : min= 3798, max= 4008, avg=3925.67, stdev=81.32, samples=6 00:10:54.454 lat (usec) : 250=69.60%, 500=30.31%, 750=0.06%, 1000=0.01% 00:10:54.454 lat (msec) : 2=0.02% 00:10:54.454 cpu : usr=0.81%, 
sys=4.65%, ctx=12488, majf=0, minf=2 00:10:54.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 issued rwts: total=12486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.454 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78598: Sun Dec 8 18:28:12 2024 00:10:54.454 read: IOPS=3879, BW=15.2MiB/s (15.9MB/s)(44.7MiB/2947msec) 00:10:54.454 slat (nsec): min=12380, max=76707, avg=15103.25, stdev=2701.54 00:10:54.454 clat (usec): min=150, max=6427, avg=241.38, stdev=120.63 00:10:54.454 lat (usec): min=180, max=6441, avg=256.48, stdev=120.86 00:10:54.454 clat percentiles (usec): 00:10:54.454 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 215], 00:10:54.454 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:10:54.454 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:10:54.454 | 99.00th=[ 318], 99.50th=[ 351], 99.90th=[ 2114], 99.95th=[ 2769], 00:10:54.454 | 99.99th=[ 6194] 00:10:54.454 bw ( KiB/s): min=15032, max=16088, per=27.48%, avg=15577.60, stdev=441.87, samples=5 00:10:54.454 iops : min= 3758, max= 4022, avg=3894.40, stdev=110.47, samples=5 00:10:54.454 lat (usec) : 250=70.18%, 500=29.58%, 750=0.06%, 1000=0.03% 00:10:54.454 lat (msec) : 2=0.03%, 4=0.08%, 10=0.03% 00:10:54.454 cpu : usr=1.09%, sys=4.89%, ctx=11435, majf=0, minf=2 00:10:54.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.454 issued rwts: total=11434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.454 00:10:54.454 Run status group 0 (all jobs): 00:10:54.454 READ: bw=55.4MiB/s (58.0MB/s), 15.2MiB/s-16.0MiB/s (15.9MB/s-16.8MB/s), io=207MiB (217MB), run=2947-3747msec 00:10:54.454 00:10:54.454 Disk stats (read/write): 00:10:54.454 nvme0n1: ios=13350/0, merge=0/0, ticks=3192/0, in_queue=3192, util=95.51% 00:10:54.454 nvme0n2: ios=14702/0, merge=0/0, ticks=3413/0, in_queue=3413, util=95.58% 00:10:54.454 nvme0n3: ios=12152/0, merge=0/0, ticks=2943/0, in_queue=2943, util=96.30% 00:10:54.454 nvme0n4: ios=11146/0, merge=0/0, ticks=2717/0, in_queue=2717, util=96.42% 00:10:54.454 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.454 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:54.714 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.972 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:55.231 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.231 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:10:55.490 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.490 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:55.749 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.749 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 78555 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:56.008 nvmf hotplug test: fio failed as expected 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:56.008 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:56.267 
18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.267 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.267 rmmod nvme_tcp 00:10:56.267 rmmod nvme_fabrics 00:10:56.267 rmmod nvme_keyring 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 78163 ']' 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 78163 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 78163 ']' 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 78163 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78163 00:10:56.527 killing process with pid 78163 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78163' 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 78163 00:10:56.527 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 78163 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:56.787 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:56.788 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:57.047 00:10:57.047 real 0m20.763s 00:10:57.047 user 1m18.297s 00:10:57.047 sys 0m9.217s 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.047 ************************************ 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 END TEST nvmf_fio_target 00:10:57.047 ************************************ 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 ************************************ 00:10:57.047 START TEST nvmf_bdevio 00:10:57.047 ************************************ 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:57.047 * Looking for test storage... 
00:10:57.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:57.047 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.312 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:57.313 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.313 --rc genhtml_branch_coverage=1 00:10:57.313 --rc genhtml_function_coverage=1 00:10:57.313 --rc genhtml_legend=1 00:10:57.313 --rc geninfo_all_blocks=1 00:10:57.313 --rc geninfo_unexecuted_blocks=1 00:10:57.313 00:10:57.313 ' 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.313 --rc genhtml_branch_coverage=1 00:10:57.313 --rc genhtml_function_coverage=1 00:10:57.313 --rc genhtml_legend=1 00:10:57.313 --rc geninfo_all_blocks=1 00:10:57.313 --rc geninfo_unexecuted_blocks=1 00:10:57.313 00:10:57.313 ' 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.313 --rc genhtml_branch_coverage=1 00:10:57.313 --rc genhtml_function_coverage=1 00:10:57.313 --rc genhtml_legend=1 00:10:57.313 --rc geninfo_all_blocks=1 00:10:57.313 --rc geninfo_unexecuted_blocks=1 00:10:57.313 00:10:57.313 ' 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:57.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.313 --rc genhtml_branch_coverage=1 00:10:57.313 --rc genhtml_function_coverage=1 00:10:57.313 --rc genhtml_legend=1 00:10:57.313 --rc geninfo_all_blocks=1 00:10:57.313 --rc geninfo_unexecuted_blocks=1 00:10:57.313 00:10:57.313 ' 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.313 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
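The trace above shows target/bdevio.sh sourcing test/nvmf/common.sh, defining its Malloc bdev constants, and handing control to nvmftestinit; the matching nvmfappstart call is traced a little further down. A condensed sketch of that preamble, with paths, values, and function names taken from the trace (the helper bodies live in common.sh and are elided here):

    # Condensed preamble of target/bdevio.sh as traced above (helper bodies elided).
    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh

    MALLOC_BDEV_SIZE=64      # MiB backing the Malloc bdev exported over NVMe/TCP
    MALLOC_BLOCK_SIZE=512    # logical block size of that bdev

    nvmftestinit             # NET_TYPE=virt + --transport=tcp -> nvmf_veth_init builds the test network
    nvmfappstart -m 0x78     # starts nvmf_tgt inside nvmf_tgt_ns_spdk on cores 3-6 (traced below)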
00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:57.314 Cannot find device "nvmf_init_br" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:57.314 Cannot find device "nvmf_init_br2" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:57.314 Cannot find device "nvmf_tgt_br" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.314 Cannot find device "nvmf_tgt_br2" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:57.314 Cannot find device "nvmf_init_br" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:57.314 Cannot find device "nvmf_init_br2" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:57.314 Cannot find device "nvmf_tgt_br" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:57.314 Cannot find device "nvmf_tgt_br2" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:57.314 Cannot find device "nvmf_br" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:57.314 Cannot find device "nvmf_init_if" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:57.314 Cannot find device "nvmf_init_if2" 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.314 
18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:57.314 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:57.574 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:57.574 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:57.574 00:10:57.574 --- 10.0.0.3 ping statistics --- 00:10:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.574 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:57.574 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:57.574 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:10:57.574 00:10:57.574 --- 10.0.0.4 ping statistics --- 00:10:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.574 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:57.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:57.574 00:10:57.574 --- 10.0.0.1 ping statistics --- 00:10:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.574 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:57.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:57.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:57.574 00:10:57.574 --- 10.0.0.2 ping statistics --- 00:10:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.574 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.574 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=78913 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 78913 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 78913 ']' 00:10:57.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.575 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.833 [2024-12-08 18:28:15.541475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
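The nvmf_veth_init steps traced above build a self-contained test network: a namespace nvmf_tgt_ns_spdk for the target, veth pairs whose bridge ends are enslaved to nvmf_br, addresses 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace, ACCEPT rules tagged with an SPDK_NVMF comment so teardown can strip them, and ping checks in both directions. A condensed sketch of the same setup, with names and addresses taken from the trace (the second, symmetric interface pair nvmf_init_if2/nvmf_tgt_if2 is omitted for brevity):

    # Condensed sketch of the nvmf_veth_init steps traced above.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Rules carry an SPDK_NVMF comment so teardown can drop them with:
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    ping -c 1 10.0.0.3                                  # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator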
00:10:57.833 [2024-12-08 18:28:15.541733] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.833 [2024-12-08 18:28:15.682209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.833 [2024-12-08 18:28:15.750621] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.833 [2024-12-08 18:28:15.750941] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.833 [2024-12-08 18:28:15.750975] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.833 [2024-12-08 18:28:15.750984] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.833 [2024-12-08 18:28:15.750990] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.833 [2024-12-08 18:28:15.751136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.833 [2024-12-08 18:28:15.751280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:57.833 [2024-12-08 18:28:15.752051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:57.833 [2024-12-08 18:28:15.752081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.092 [2024-12-08 18:28:15.805362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.092 [2024-12-08 18:28:15.919884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.092 Malloc0 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.092 [2024-12-08 18:28:15.975111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:58.092 { 00:10:58.092 "params": { 00:10:58.092 "name": "Nvme$subsystem", 00:10:58.092 "trtype": "$TEST_TRANSPORT", 00:10:58.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:58.092 "adrfam": "ipv4", 00:10:58.092 "trsvcid": "$NVMF_PORT", 00:10:58.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:58.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:58.092 "hdgst": ${hdgst:-false}, 00:10:58.092 "ddgst": ${ddgst:-false} 00:10:58.092 }, 00:10:58.092 "method": "bdev_nvme_attach_controller" 00:10:58.092 } 00:10:58.092 EOF 00:10:58.092 )") 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
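The rpc_cmd calls traced above configure the target side of the test: a TCP transport with an 8192-byte IO unit, a 64 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.3:4420. gen_nvmf_target_json then renders the heredoc template into the bdev JSON that bdevio reads from /dev/fd/62 (the resolved form is printed just below). Outside the harness the same target setup could be driven with SPDK's scripts/rpc.py, which rpc_cmd wraps; the sketch below mirrors the traced calls, though exact flag spellings can vary between SPDK releases:

    # Assumes a running nvmf_tgt and SPDK's scripts/rpc.py on PATH; arguments
    # mirror the rpc_cmd calls in the trace above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio itself never issues RPCs to the target; it only consumes the JSON config, whose bdev_nvme_attach_controller entry points the Nvme1 bdev at 10.0.0.3:4420 with hostnqn nqn.2016-06.io.spdk:host1.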
00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:58.092 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:58.092 "params": { 00:10:58.092 "name": "Nvme1", 00:10:58.092 "trtype": "tcp", 00:10:58.092 "traddr": "10.0.0.3", 00:10:58.092 "adrfam": "ipv4", 00:10:58.092 "trsvcid": "4420", 00:10:58.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:58.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:58.092 "hdgst": false, 00:10:58.092 "ddgst": false 00:10:58.092 }, 00:10:58.092 "method": "bdev_nvme_attach_controller" 00:10:58.092 }' 00:10:58.351 [2024-12-08 18:28:16.024388] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:58.351 [2024-12-08 18:28:16.024518] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78941 ] 00:10:58.351 [2024-12-08 18:28:16.157359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:58.351 [2024-12-08 18:28:16.253308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.351 [2024-12-08 18:28:16.253464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.351 [2024-12-08 18:28:16.253464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.610 [2024-12-08 18:28:16.334235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.610 I/O targets: 00:10:58.610 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:58.610 00:10:58.610 00:10:58.610 CUnit - A unit testing framework for C - Version 2.1-3 00:10:58.610 http://cunit.sourceforge.net/ 00:10:58.610 00:10:58.610 00:10:58.610 Suite: bdevio tests on: Nvme1n1 00:10:58.610 Test: blockdev write read block ...passed 00:10:58.610 Test: blockdev write zeroes read block ...passed 00:10:58.610 Test: blockdev write zeroes read no split ...passed 00:10:58.610 Test: blockdev write zeroes read split ...passed 00:10:58.610 Test: blockdev write zeroes read split partial ...passed 00:10:58.610 Test: blockdev reset ...[2024-12-08 18:28:16.498973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:58.610 [2024-12-08 18:28:16.499283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20590d0 (9): Bad file descriptor 00:10:58.610 [2024-12-08 18:28:16.511543] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:58.610 passed 00:10:58.610 Test: blockdev write read 8 blocks ...passed 00:10:58.610 Test: blockdev write read size > 128k ...passed 00:10:58.610 Test: blockdev write read invalid size ...passed 00:10:58.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:58.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:58.610 Test: blockdev write read max offset ...passed 00:10:58.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:58.610 Test: blockdev writev readv 8 blocks ...passed 00:10:58.610 Test: blockdev writev readv 30 x 1block ...passed 00:10:58.610 Test: blockdev writev readv block ...passed 00:10:58.610 Test: blockdev writev readv size > 128k ...passed 00:10:58.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:58.610 Test: blockdev comparev and writev ...[2024-12-08 18:28:16.523682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.610 [2024-12-08 18:28:16.523741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:58.610 [2024-12-08 18:28:16.523770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.610 [2024-12-08 18:28:16.523784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:58.610 [2024-12-08 18:28:16.524142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.610 [2024-12-08 18:28:16.524171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:58.610 [2024-12-08 18:28:16.524193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.610 [2024-12-08 18:28:16.524207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:58.611 [2024-12-08 18:28:16.524483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.611 [2024-12-08 18:28:16.524511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:58.611 [2024-12-08 18:28:16.524535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.611 [2024-12-08 18:28:16.524548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:58.611 [2024-12-08 18:28:16.524977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.611 [2024-12-08 18:28:16.525140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:58.611 [2024-12-08 18:28:16.525167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.611 [2024-12-08 18:28:16.525181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:58.611 passed 00:10:58.611 Test: blockdev nvme passthru rw ...passed 00:10:58.611 Test: blockdev nvme passthru vendor specific ...passed 00:10:58.611 Test: blockdev nvme admin passthru ...[2024-12-08 18:28:16.526388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.611 [2024-12-08 18:28:16.526543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:58.611 [2024-12-08 18:28:16.526737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.611 [2024-12-08 18:28:16.526757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:58.611 [2024-12-08 18:28:16.526879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.611 [2024-12-08 18:28:16.526898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:58.611 [2024-12-08 18:28:16.527005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.611 [2024-12-08 18:28:16.527024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:58.870 passed 00:10:58.870 Test: blockdev copy ...passed 00:10:58.870 00:10:58.870 Run Summary: Type Total Ran Passed Failed Inactive 00:10:58.870 suites 1 1 n/a 0 0 00:10:58.870 tests 23 23 23 0 0 00:10:58.870 asserts 152 152 152 0 n/a 00:10:58.870 00:10:58.870 Elapsed time = 0.149 seconds 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.129 rmmod nvme_tcp 00:10:59.129 rmmod nvme_fabrics 00:10:59.129 rmmod nvme_keyring 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 78913 ']' 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 78913 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 78913 ']' 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 78913 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.129 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78913 00:10:59.129 killing process with pid 78913 00:10:59.129 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:59.129 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:59.129 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78913' 00:10:59.129 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 78913 00:10:59.129 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 78913 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:59.388 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:59.646 00:10:59.646 real 0m2.658s 00:10:59.646 user 0m7.371s 00:10:59.646 sys 0m0.915s 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.646 ************************************ 00:10:59.646 END TEST nvmf_bdevio 00:10:59.646 ************************************ 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:59.646 ************************************ 00:10:59.646 END TEST nvmf_target_core 00:10:59.646 ************************************ 00:10:59.646 00:10:59.646 real 2m33.120s 00:10:59.646 user 6m39.628s 00:10:59.646 sys 0m52.508s 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.646 18:28:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.646 18:28:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:59.646 18:28:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.646 18:28:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.646 18:28:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.906 ************************************ 00:10:59.906 START TEST nvmf_target_extra 00:10:59.906 ************************************ 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:59.906 * Looking for test storage... 
00:10:59.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.906 --rc genhtml_branch_coverage=1 00:10:59.906 --rc genhtml_function_coverage=1 00:10:59.906 --rc genhtml_legend=1 00:10:59.906 --rc geninfo_all_blocks=1 00:10:59.906 --rc geninfo_unexecuted_blocks=1 00:10:59.906 00:10:59.906 ' 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.906 --rc genhtml_branch_coverage=1 00:10:59.906 --rc genhtml_function_coverage=1 00:10:59.906 --rc genhtml_legend=1 00:10:59.906 --rc geninfo_all_blocks=1 00:10:59.906 --rc geninfo_unexecuted_blocks=1 00:10:59.906 00:10:59.906 ' 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.906 --rc genhtml_branch_coverage=1 00:10:59.906 --rc genhtml_function_coverage=1 00:10:59.906 --rc genhtml_legend=1 00:10:59.906 --rc geninfo_all_blocks=1 00:10:59.906 --rc geninfo_unexecuted_blocks=1 00:10:59.906 00:10:59.906 ' 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.906 --rc genhtml_branch_coverage=1 00:10:59.906 --rc genhtml_function_coverage=1 00:10:59.906 --rc genhtml_legend=1 00:10:59.906 --rc geninfo_all_blocks=1 00:10:59.906 --rc geninfo_unexecuted_blocks=1 00:10:59.906 00:10:59.906 ' 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.906 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.907 18:28:17 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.907 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.907 ************************************ 00:10:59.907 START TEST nvmf_auth_target 00:10:59.907 ************************************ 00:10:59.907 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:00.167 * Looking for test storage... 
00:11:00.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:00.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.167 --rc genhtml_branch_coverage=1 00:11:00.167 --rc genhtml_function_coverage=1 00:11:00.167 --rc genhtml_legend=1 00:11:00.167 --rc geninfo_all_blocks=1 00:11:00.167 --rc geninfo_unexecuted_blocks=1 00:11:00.167 00:11:00.167 ' 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:00.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.167 --rc genhtml_branch_coverage=1 00:11:00.167 --rc genhtml_function_coverage=1 00:11:00.167 --rc genhtml_legend=1 00:11:00.167 --rc geninfo_all_blocks=1 00:11:00.167 --rc geninfo_unexecuted_blocks=1 00:11:00.167 00:11:00.167 ' 00:11:00.167 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:00.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.167 --rc genhtml_branch_coverage=1 00:11:00.168 --rc genhtml_function_coverage=1 00:11:00.168 --rc genhtml_legend=1 00:11:00.168 --rc geninfo_all_blocks=1 00:11:00.168 --rc geninfo_unexecuted_blocks=1 00:11:00.168 00:11:00.168 ' 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:00.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.168 --rc genhtml_branch_coverage=1 00:11:00.168 --rc genhtml_function_coverage=1 00:11:00.168 --rc genhtml_legend=1 00:11:00.168 --rc geninfo_all_blocks=1 00:11:00.168 --rc geninfo_unexecuted_blocks=1 00:11:00.168 00:11:00.168 ' 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
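[Editor's note] The trace above is scripts/common.sh's dotted-version check ("lt 1.15 2" via cmp_versions) deciding whether the installed lcov is older than 2.x before the coverage options are exported. A minimal standalone sketch of that per-component comparison pattern; the helper and variable names here are illustrative, not the script's own:

#!/usr/bin/env bash
# version_lt A B: succeed (return 0) when dotted version A is lower than B.
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                # equal versions are not "less than"
}

# Mirrors the trace: pick the 1.x-style --rc option names when lcov < 2.
lcov_ver=$(lcov --version | awk '{print $NF}')
if version_lt "$lcov_ver" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi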
nvmf/common.sh@7 -- # uname -s 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.168 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
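[Editor's note] The "[: : integer expression expected" message that appears twice above is emitted by nvmf/common.sh line 33 when it evaluates '[' '' -eq 1 ']': the flag being tested is empty, test's -eq needs integers on both sides, so the guard prints the warning and simply falls through to the false branch. A hedged illustration of the failure mode and the usual defensive guards; the variable name is hypothetical, not the one common.sh actually tests:

#!/usr/bin/env bash
SOME_TEST_FLAG=""                              # hypothetical flag, empty as in the trace

# Reproduces the warning: with an empty string, -eq has nothing numeric to compare,
# the test prints "[: : integer expression expected" and evaluates false.
if [ "$SOME_TEST_FLAG" -eq 1 ]; then
    echo "flag enabled"
fi

# Defensive variants that stay quiet when the flag is empty or unset:
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then echo "flag enabled"; fi
if [[ "$SOME_TEST_FLAG" == "1" ]]; then echo "flag enabled"; fi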
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:00.168 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:00.168 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.169 
18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:00.169 Cannot find device "nvmf_init_br" 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:00.169 Cannot find device "nvmf_init_br2" 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:00.169 Cannot find device "nvmf_tgt_br" 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.169 Cannot find device "nvmf_tgt_br2" 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:00.169 Cannot find device "nvmf_init_br" 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:00.169 Cannot find device "nvmf_init_br2" 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:00.169 Cannot find device "nvmf_tgt_br" 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:00.169 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:00.428 Cannot find device "nvmf_tgt_br2" 00:11:00.428 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:00.429 Cannot find device "nvmf_br" 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:00.429 Cannot find device "nvmf_init_if" 00:11:00.429 18:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:00.429 Cannot find device "nvmf_init_if2" 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.429 18:28:18 
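[Editor's note] The nvmf_veth_init sequence above first tears down any leftover interfaces (hence the "Cannot find device" lines on a clean host), then builds the test topology: a network namespace holding the target ends of two veth pairs, two more pairs for the initiator side, and addresses 10.0.0.1-10.0.0.4/24. A condensed sketch of the same commands, restating what the trace runs (bridge and iptables wiring follow in the next step):

#!/usr/bin/env bash
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Initiator- and target-side veth pairs; the *_br peers stay in the default
# namespace and are enslaved to a bridge in the next phase.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target-facing ends into the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Initiator addresses outside the namespace, target addresses inside it.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring every link up, including loopback inside the namespace.
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up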
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.429 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:00.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:00.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:11:00.688 00:11:00.688 --- 10.0.0.3 ping statistics --- 00:11:00.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.688 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:00.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:00.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:11:00.688 00:11:00.688 --- 10.0.0.4 ping statistics --- 00:11:00.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.688 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:00.688 00:11:00.688 --- 10.0.0.1 ping statistics --- 00:11:00.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.688 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:00.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:11:00.688 00:11:00.688 --- 10.0.0.2 ping statistics --- 00:11:00.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.688 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=79238 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 79238 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79238 ']' 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
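[Editor's note] With the links up, the trace creates the nvmf_br bridge, enslaves the four *_br peers, opens TCP port 4420 on the initiator interfaces, verifies reachability with single pings in both directions, loads nvme-tcp, and finally launches nvmf_tgt inside the namespace with DH-HMAC-CHAP debug logging. A hedged sketch of that phase; the -m comment tag is what the ipts wrapper adds so teardown can later delete exactly these rules:

#!/usr/bin/env bash
set -e
NS=nvmf_tgt_ns_spdk

# Bridge the initiator- and target-side peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Accept NVMe/TCP (port 4420) on the initiator interfaces and traffic across
# the bridge; the comment marks the rules as SPDK-owned for later cleanup.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Smoke-test the topology in both directions.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2

# The host side needs the kernel NVMe/TCP initiator for later connect tests.
modprobe nvme-tcp

# Start the target inside the namespace with nvmf_auth tracing enabled.
ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!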
00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.688 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=79270 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=0b478aef55acac1a449af89e484bdb589b3e7709a40bf804 00:11:01.626 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:11:01.885 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.mDw 00:11:01.885 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 0b478aef55acac1a449af89e484bdb589b3e7709a40bf804 0 00:11:01.885 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 0b478aef55acac1a449af89e484bdb589b3e7709a40bf804 0 00:11:01.885 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:01.885 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:01.885 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=0b478aef55acac1a449af89e484bdb589b3e7709a40bf804 00:11:01.885 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:01.886 18:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.mDw 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.mDw 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.mDw 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c98103710dc01378f6d7551ffbd4d01f99daa62f8ab7385f5da0498e379f1905 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.fdj 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c98103710dc01378f6d7551ffbd4d01f99daa62f8ab7385f5da0498e379f1905 3 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c98103710dc01378f6d7551ffbd4d01f99daa62f8ab7385f5da0498e379f1905 3 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c98103710dc01378f6d7551ffbd4d01f99daa62f8ab7385f5da0498e379f1905 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.fdj 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.fdj 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.fdj 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:01.886 18:28:19 
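[Editor's note] The gen_dhchap_key calls above draw random bytes from /dev/urandom with xxd, wrap them into a DHHC-1 secret string, and store the result in a mode-0600 temp file. A hedged sketch of that flow, assuming the DHHC-1 layout used for NVMe DH-HMAC-CHAP secrets (base64 of the raw key followed by its little-endian CRC-32, prefixed by a two-digit hash identifier: 00 = unhashed, 01/02/03 = SHA-256/384/512); the helper name and exact formatting are illustrative, not copied from nvmf/common.sh:

#!/usr/bin/env bash
# gen_dhchap_key <hash-id> <hex-len>: print the path of a new DHHC-1 secret file.
gen_dhchap_key() {
    local digest=$1 len=$2 hex secret file
    # len hex characters of randomness == len/2 raw bytes.
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    # Wrap as DHHC-1:<hash-id>:base64(key || crc32-le(key)):
    secret=$(python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[1]):02x}:{base64.b64encode(key + crc).decode()}:")
' "$digest" "$hex")
    file=$(mktemp -t spdk.key-XXXX)
    echo "$secret" > "$file"
    chmod 0600 "$file"          # secret files must not be group/world readable
    echo "$file"
}

key0=$(gen_dhchap_key 0 48)     # like keys[0] above: 48 hex chars, no hash transform
ckey0=$(gen_dhchap_key 3 64)    # like ckeys[0]: 64 hex chars, sha512 (id 3)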
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f4891972413f8c73b75f00d683e8c5df 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.4PR 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f4891972413f8c73b75f00d683e8c5df 1 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f4891972413f8c73b75f00d683e8c5df 1 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f4891972413f8c73b75f00d683e8c5df 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.4PR 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.4PR 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.4PR 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f0f9ba24331e10f8c56d5a18ba7d7d09d9d4f71a8bf0f875 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Hzy 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f0f9ba24331e10f8c56d5a18ba7d7d09d9d4f71a8bf0f875 2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f0f9ba24331e10f8c56d5a18ba7d7d09d9d4f71a8bf0f875 2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f0f9ba24331e10f8c56d5a18ba7d7d09d9d4f71a8bf0f875 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Hzy 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Hzy 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Hzy 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=493d3c4c68e51c50e9e942d8fc309589fc8d09f073d37da2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.4Cs 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 493d3c4c68e51c50e9e942d8fc309589fc8d09f073d37da2 2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 493d3c4c68e51c50e9e942d8fc309589fc8d09f073d37da2 2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=493d3c4c68e51c50e9e942d8fc309589fc8d09f073d37da2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:01.886 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:02.145 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.4Cs 00:11:02.145 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.4Cs 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.4Cs 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:02.146 18:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5a471eac4e12c2f47c7c73c5b8a2f721 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.8tL 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5a471eac4e12c2f47c7c73c5b8a2f721 1 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5a471eac4e12c2f47c7c73c5b8a2f721 1 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5a471eac4e12c2f47c7c73c5b8a2f721 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.8tL 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.8tL 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.8tL 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bcfedee43faa851d0738c6287d19b87144ccae2a0415d6965a1d6389b9130204 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.VY8 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
bcfedee43faa851d0738c6287d19b87144ccae2a0415d6965a1d6389b9130204 3 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bcfedee43faa851d0738c6287d19b87144ccae2a0415d6965a1d6389b9130204 3 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bcfedee43faa851d0738c6287d19b87144ccae2a0415d6965a1d6389b9130204 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.VY8 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.VY8 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.VY8 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 79238 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79238 ']' 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.146 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 79270 /var/tmp/host.sock 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79270 ']' 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
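[Editor's note] Once all four key/controller-key pairs exist, the test waits for both SPDK applications (the target on the default RPC socket and the host app on /var/tmp/host.sock) to start answering RPCs. A simplified stand-in for the waitforlisten helper, not its actual implementation, polling a real RPC (rpc_get_methods) until the socket responds:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Poll the JSON-RPC socket until the freshly started app answers, or give up.
wait_for_rpc() {
    local sock=$1 i
    for (( i = 0; i < 100; i++ )); do
        if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    echo "RPC socket $sock never came up" >&2
    return 1
}

wait_for_rpc /var/tmp/spdk.sock    # nvmf_tgt (pid 79238 in this run)
wait_for_rpc /var/tmp/host.sock    # spdk_tgt host application (pid 79270)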
00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.405 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mDw 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.mDw 00:11:02.676 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.mDw 00:11:02.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.fdj ]] 00:11:02.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fdj 00:11:02.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fdj 00:11:02.935 18:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fdj 00:11:03.193 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:03.193 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4PR 00:11:03.193 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.193 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.193 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.193 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4PR 00:11:03.193 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4PR 00:11:03.451 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Hzy ]] 00:11:03.451 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hzy 00:11:03.451 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.451 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.451 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.451 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hzy 00:11:03.451 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hzy 00:11:03.710 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:03.710 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cs 00:11:03.710 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.710 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.710 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.710 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cs 00:11:03.710 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cs 00:11:03.969 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.8tL ]] 00:11:03.969 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8tL 00:11:03.969 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.969 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.969 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.969 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8tL 00:11:03.969 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8tL 00:11:04.227 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:04.227 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VY8 00:11:04.227 18:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.227 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.227 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.227 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.VY8 00:11:04.227 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.VY8 00:11:04.486 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:04.486 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:04.486 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.486 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.486 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:04.486 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.745 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.004 00:11:05.004 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.004 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.004 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.262 { 00:11:05.262 "cntlid": 1, 00:11:05.262 "qid": 0, 00:11:05.262 "state": "enabled", 00:11:05.262 "thread": "nvmf_tgt_poll_group_000", 00:11:05.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:05.262 "listen_address": { 00:11:05.262 "trtype": "TCP", 00:11:05.262 "adrfam": "IPv4", 00:11:05.262 "traddr": "10.0.0.3", 00:11:05.262 "trsvcid": "4420" 00:11:05.262 }, 00:11:05.262 "peer_address": { 00:11:05.262 "trtype": "TCP", 00:11:05.262 "adrfam": "IPv4", 00:11:05.262 "traddr": "10.0.0.1", 00:11:05.262 "trsvcid": "46094" 00:11:05.262 }, 00:11:05.262 "auth": { 00:11:05.262 "state": "completed", 00:11:05.262 "digest": "sha256", 00:11:05.262 "dhgroup": "null" 00:11:05.262 } 00:11:05.262 } 00:11:05.262 ]' 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.262 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.522 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:05.522 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.522 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.522 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.522 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.781 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:05.781 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.020 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.020 18:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.279 00:11:10.279 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.279 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.279 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.537 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.538 { 00:11:10.538 "cntlid": 3, 00:11:10.538 "qid": 0, 00:11:10.538 "state": "enabled", 00:11:10.538 "thread": "nvmf_tgt_poll_group_000", 00:11:10.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:10.538 "listen_address": { 00:11:10.538 "trtype": "TCP", 00:11:10.538 "adrfam": "IPv4", 00:11:10.538 "traddr": "10.0.0.3", 00:11:10.538 "trsvcid": "4420" 00:11:10.538 }, 00:11:10.538 "peer_address": { 00:11:10.538 "trtype": "TCP", 00:11:10.538 "adrfam": "IPv4", 00:11:10.538 "traddr": "10.0.0.1", 00:11:10.538 "trsvcid": "53062" 00:11:10.538 }, 00:11:10.538 "auth": { 00:11:10.538 "state": "completed", 00:11:10.538 "digest": "sha256", 00:11:10.538 "dhgroup": "null" 00:11:10.538 } 00:11:10.538 } 00:11:10.538 ]' 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.538 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.107 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret 
DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:11.107 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:11.676 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.936 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.195 00:11:12.195 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.195 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.195 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.455 { 00:11:12.455 "cntlid": 5, 00:11:12.455 "qid": 0, 00:11:12.455 "state": "enabled", 00:11:12.455 "thread": "nvmf_tgt_poll_group_000", 00:11:12.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:12.455 "listen_address": { 00:11:12.455 "trtype": "TCP", 00:11:12.455 "adrfam": "IPv4", 00:11:12.455 "traddr": "10.0.0.3", 00:11:12.455 "trsvcid": "4420" 00:11:12.455 }, 00:11:12.455 "peer_address": { 00:11:12.455 "trtype": "TCP", 00:11:12.455 "adrfam": "IPv4", 00:11:12.455 "traddr": "10.0.0.1", 00:11:12.455 "trsvcid": "53102" 00:11:12.455 }, 00:11:12.455 "auth": { 00:11:12.455 "state": "completed", 00:11:12.455 "digest": "sha256", 00:11:12.455 "dhgroup": "null" 00:11:12.455 } 00:11:12.455 } 00:11:12.455 ]' 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.455 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.715 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:12.715 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.715 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.715 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.715 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.974 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:12.974 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.543 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:13.814 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:14.384 00:11:14.384 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.384 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.384 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.384 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.384 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.384 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.384 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.643 { 00:11:14.643 "cntlid": 7, 00:11:14.643 "qid": 0, 00:11:14.643 "state": "enabled", 00:11:14.643 "thread": "nvmf_tgt_poll_group_000", 00:11:14.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:14.643 "listen_address": { 00:11:14.643 "trtype": "TCP", 00:11:14.643 "adrfam": "IPv4", 00:11:14.643 "traddr": "10.0.0.3", 00:11:14.643 "trsvcid": "4420" 00:11:14.643 }, 00:11:14.643 "peer_address": { 00:11:14.643 "trtype": "TCP", 00:11:14.643 "adrfam": "IPv4", 00:11:14.643 "traddr": "10.0.0.1", 00:11:14.643 "trsvcid": "53136" 00:11:14.643 }, 00:11:14.643 "auth": { 00:11:14.643 "state": "completed", 00:11:14.643 "digest": "sha256", 00:11:14.643 "dhgroup": "null" 00:11:14.643 } 00:11:14.643 } 00:11:14.643 ]' 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.643 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.903 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:14.903 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:15.471 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.472 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.731 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.300 00:11:16.300 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.300 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.300 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.300 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.300 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.300 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.300 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.300 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.300 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.300 { 00:11:16.300 "cntlid": 9, 00:11:16.300 "qid": 0, 00:11:16.300 "state": "enabled", 00:11:16.300 "thread": "nvmf_tgt_poll_group_000", 00:11:16.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:16.300 "listen_address": { 00:11:16.300 "trtype": "TCP", 00:11:16.300 "adrfam": "IPv4", 00:11:16.300 "traddr": "10.0.0.3", 00:11:16.300 "trsvcid": "4420" 00:11:16.300 }, 00:11:16.300 "peer_address": { 00:11:16.300 "trtype": "TCP", 00:11:16.300 "adrfam": "IPv4", 00:11:16.300 "traddr": "10.0.0.1", 00:11:16.300 "trsvcid": "47060" 00:11:16.300 }, 00:11:16.300 "auth": { 00:11:16.300 "state": "completed", 00:11:16.300 "digest": "sha256", 00:11:16.300 "dhgroup": "ffdhe2048" 00:11:16.300 } 00:11:16.300 } 00:11:16.300 ]' 00:11:16.300 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.559 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.559 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.559 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:16.559 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.559 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.559 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.559 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.818 
18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:16.818 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.385 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.643 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.901 00:11:17.901 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.901 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.901 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.159 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.159 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.159 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.159 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.159 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.159 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.159 { 00:11:18.159 "cntlid": 11, 00:11:18.159 "qid": 0, 00:11:18.159 "state": "enabled", 00:11:18.159 "thread": "nvmf_tgt_poll_group_000", 00:11:18.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:18.159 "listen_address": { 00:11:18.159 "trtype": "TCP", 00:11:18.159 "adrfam": "IPv4", 00:11:18.159 "traddr": "10.0.0.3", 00:11:18.159 "trsvcid": "4420" 00:11:18.159 }, 00:11:18.159 "peer_address": { 00:11:18.159 "trtype": "TCP", 00:11:18.159 "adrfam": "IPv4", 00:11:18.159 "traddr": "10.0.0.1", 00:11:18.159 "trsvcid": "47096" 00:11:18.159 }, 00:11:18.159 "auth": { 00:11:18.159 "state": "completed", 00:11:18.159 "digest": "sha256", 00:11:18.159 "dhgroup": "ffdhe2048" 00:11:18.159 } 00:11:18.159 } 00:11:18.159 ]' 00:11:18.159 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.418 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.418 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.418 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.418 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.418 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.418 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.418 
18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.675 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:18.675 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:19.240 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.499 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.065 00:11:20.065 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.065 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.065 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.324 { 00:11:20.324 "cntlid": 13, 00:11:20.324 "qid": 0, 00:11:20.324 "state": "enabled", 00:11:20.324 "thread": "nvmf_tgt_poll_group_000", 00:11:20.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:20.324 "listen_address": { 00:11:20.324 "trtype": "TCP", 00:11:20.324 "adrfam": "IPv4", 00:11:20.324 "traddr": "10.0.0.3", 00:11:20.324 "trsvcid": "4420" 00:11:20.324 }, 00:11:20.324 "peer_address": { 00:11:20.324 "trtype": "TCP", 00:11:20.324 "adrfam": "IPv4", 00:11:20.324 "traddr": "10.0.0.1", 00:11:20.324 "trsvcid": "47116" 00:11:20.324 }, 00:11:20.324 "auth": { 00:11:20.324 "state": "completed", 00:11:20.324 "digest": "sha256", 00:11:20.324 "dhgroup": "ffdhe2048" 00:11:20.324 } 00:11:20.324 } 00:11:20.324 ]' 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.324 18:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.324 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.582 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:20.582 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:21.148 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.407 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:21.407 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.407 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.407 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.408 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
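Every digest/dhgroup/key iteration in this run repeats the same host-side cycle; condensed into one bash sketch (the RPC names, flags, addresses and NQNs are the ones shown in this log; the helper names and variables are illustrative):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }    # host-side RPC server; plain "$rpc" talks to the target
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c

authenticate_once() {    # e.g. authenticate_once sha256 ffdhe2048 3
    local digest=$1 dhgroup=$2 keyid=$3
    # restrict the host to a single digest/dhgroup pair for this pass
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # allow the host on the subsystem with the matching key (--dhchap-ctrlr-key "ckey$keyid" is
    # added as well whenever a controller key was generated for this keyid)
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
    # confirm the qpair completed DH-HMAC-CHAP with the expected parameters
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
    hostrpc bdev_nvme_detach_controller nvme0
    # the same key pair is then exercised through the kernel initiator:
    #   nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -q "$hostnqn" --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c \
    #       -i 1 -l 0 --dhchap-secret DHHC-1:..: [--dhchap-ctrl-secret DHHC-1:..:]
    #   nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}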
00:11:21.667 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.667 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:21.667 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.667 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.926 00:11:21.926 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.926 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.926 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.185 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.185 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.185 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.185 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.185 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.185 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.185 { 00:11:22.185 "cntlid": 15, 00:11:22.185 "qid": 0, 00:11:22.185 "state": "enabled", 00:11:22.185 "thread": "nvmf_tgt_poll_group_000", 00:11:22.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:22.185 "listen_address": { 00:11:22.185 "trtype": "TCP", 00:11:22.185 "adrfam": "IPv4", 00:11:22.185 "traddr": "10.0.0.3", 00:11:22.185 "trsvcid": "4420" 00:11:22.185 }, 00:11:22.185 "peer_address": { 00:11:22.185 "trtype": "TCP", 00:11:22.185 "adrfam": "IPv4", 00:11:22.185 "traddr": "10.0.0.1", 00:11:22.185 "trsvcid": "47150" 00:11:22.185 }, 00:11:22.185 "auth": { 00:11:22.185 "state": "completed", 00:11:22.185 "digest": "sha256", 00:11:22.185 "dhgroup": "ffdhe2048" 00:11:22.185 } 00:11:22.185 } 00:11:22.185 ]' 00:11:22.185 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.185 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.185 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.185 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:22.185 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.444 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.444 
18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.444 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.702 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:22.702 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.281 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.281 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.862 00:11:23.862 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.862 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.862 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.862 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.862 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.862 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.862 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.121 { 00:11:24.121 "cntlid": 17, 00:11:24.121 "qid": 0, 00:11:24.121 "state": "enabled", 00:11:24.121 "thread": "nvmf_tgt_poll_group_000", 00:11:24.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:24.121 "listen_address": { 00:11:24.121 "trtype": "TCP", 00:11:24.121 "adrfam": "IPv4", 00:11:24.121 "traddr": "10.0.0.3", 00:11:24.121 "trsvcid": "4420" 00:11:24.121 }, 00:11:24.121 "peer_address": { 00:11:24.121 "trtype": "TCP", 00:11:24.121 "adrfam": "IPv4", 00:11:24.121 "traddr": "10.0.0.1", 00:11:24.121 "trsvcid": "47162" 00:11:24.121 }, 00:11:24.121 "auth": { 00:11:24.121 "state": "completed", 00:11:24.121 "digest": "sha256", 00:11:24.121 "dhgroup": "ffdhe3072" 00:11:24.121 } 00:11:24.121 } 00:11:24.121 ]' 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.121 18:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.121 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.380 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:24.380 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:24.949 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.209 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.469 00:11:25.469 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.469 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.469 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.037 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.037 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.037 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.037 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.037 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.037 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.037 { 00:11:26.037 "cntlid": 19, 00:11:26.037 "qid": 0, 00:11:26.037 "state": "enabled", 00:11:26.037 "thread": "nvmf_tgt_poll_group_000", 00:11:26.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:26.038 "listen_address": { 00:11:26.038 "trtype": "TCP", 00:11:26.038 "adrfam": "IPv4", 00:11:26.038 "traddr": "10.0.0.3", 00:11:26.038 "trsvcid": "4420" 00:11:26.038 }, 00:11:26.038 "peer_address": { 00:11:26.038 "trtype": "TCP", 00:11:26.038 "adrfam": "IPv4", 00:11:26.038 "traddr": "10.0.0.1", 00:11:26.038 "trsvcid": "45484" 00:11:26.038 }, 00:11:26.038 "auth": { 00:11:26.038 "state": "completed", 00:11:26.038 "digest": "sha256", 00:11:26.038 "dhgroup": "ffdhe3072" 00:11:26.038 } 00:11:26.038 } 00:11:26.038 ]' 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.038 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.605 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:26.605 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.173 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.173 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.174 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.174 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.174 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.174 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.740 00:11:27.740 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.740 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.740 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.998 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.998 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.998 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.998 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.999 { 00:11:27.999 "cntlid": 21, 00:11:27.999 "qid": 0, 00:11:27.999 "state": "enabled", 00:11:27.999 "thread": "nvmf_tgt_poll_group_000", 00:11:27.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:27.999 "listen_address": { 00:11:27.999 "trtype": "TCP", 00:11:27.999 "adrfam": "IPv4", 00:11:27.999 "traddr": "10.0.0.3", 00:11:27.999 "trsvcid": "4420" 00:11:27.999 }, 00:11:27.999 "peer_address": { 00:11:27.999 "trtype": "TCP", 00:11:27.999 "adrfam": "IPv4", 00:11:27.999 "traddr": "10.0.0.1", 00:11:27.999 "trsvcid": "45520" 00:11:27.999 }, 00:11:27.999 "auth": { 00:11:27.999 "state": "completed", 00:11:27.999 "digest": "sha256", 00:11:27.999 "dhgroup": "ffdhe3072" 00:11:27.999 } 00:11:27.999 } 00:11:27.999 ]' 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.999 18:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.999 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.257 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:28.257 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:28.823 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:29.081 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:29.340 00:11:29.340 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.340 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.340 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.598 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.598 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.598 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.599 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.599 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.599 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.599 { 00:11:29.599 "cntlid": 23, 00:11:29.599 "qid": 0, 00:11:29.599 "state": "enabled", 00:11:29.599 "thread": "nvmf_tgt_poll_group_000", 00:11:29.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:29.599 "listen_address": { 00:11:29.599 "trtype": "TCP", 00:11:29.599 "adrfam": "IPv4", 00:11:29.599 "traddr": "10.0.0.3", 00:11:29.599 "trsvcid": "4420" 00:11:29.599 }, 00:11:29.599 "peer_address": { 00:11:29.599 "trtype": "TCP", 00:11:29.599 "adrfam": "IPv4", 00:11:29.599 "traddr": "10.0.0.1", 00:11:29.599 "trsvcid": "45558" 00:11:29.599 }, 00:11:29.599 "auth": { 00:11:29.599 "state": "completed", 00:11:29.599 "digest": "sha256", 00:11:29.599 "dhgroup": "ffdhe3072" 00:11:29.599 } 00:11:29.599 } 00:11:29.599 ]' 00:11:29.599 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.599 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:29.599 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.857 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.857 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.857 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.857 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.857 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.116 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:30.116 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.685 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:30.686 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.946 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.515 00:11:31.516 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.516 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.516 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.775 { 00:11:31.775 "cntlid": 25, 00:11:31.775 "qid": 0, 00:11:31.775 "state": "enabled", 00:11:31.775 "thread": "nvmf_tgt_poll_group_000", 00:11:31.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:31.775 "listen_address": { 00:11:31.775 "trtype": "TCP", 00:11:31.775 "adrfam": "IPv4", 00:11:31.775 "traddr": "10.0.0.3", 00:11:31.775 "trsvcid": "4420" 00:11:31.775 }, 00:11:31.775 "peer_address": { 00:11:31.775 "trtype": "TCP", 00:11:31.775 "adrfam": "IPv4", 00:11:31.775 "traddr": "10.0.0.1", 00:11:31.775 "trsvcid": "45572" 00:11:31.775 }, 00:11:31.775 "auth": { 00:11:31.775 "state": "completed", 00:11:31.775 "digest": "sha256", 00:11:31.775 "dhgroup": "ffdhe4096" 00:11:31.775 } 00:11:31.775 } 00:11:31.775 ]' 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.775 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.034 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:32.034 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.603 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.863 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.431 00:11:33.431 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.431 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.431 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.689 { 00:11:33.689 "cntlid": 27, 00:11:33.689 "qid": 0, 00:11:33.689 "state": "enabled", 00:11:33.689 "thread": "nvmf_tgt_poll_group_000", 00:11:33.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:33.689 "listen_address": { 00:11:33.689 "trtype": "TCP", 00:11:33.689 "adrfam": "IPv4", 00:11:33.689 "traddr": "10.0.0.3", 00:11:33.689 "trsvcid": "4420" 00:11:33.689 }, 00:11:33.689 "peer_address": { 00:11:33.689 "trtype": "TCP", 00:11:33.689 "adrfam": "IPv4", 00:11:33.689 "traddr": "10.0.0.1", 00:11:33.689 "trsvcid": "45608" 00:11:33.689 }, 00:11:33.689 "auth": { 00:11:33.689 "state": "completed", 
00:11:33.689 "digest": "sha256", 00:11:33.689 "dhgroup": "ffdhe4096" 00:11:33.689 } 00:11:33.689 } 00:11:33.689 ]' 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.689 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.690 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.690 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.690 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.690 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.256 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:34.257 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.824 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.083 18:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.083 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.341 00:11:35.341 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.341 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.341 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.600 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.600 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.600 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.600 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.600 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.600 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.600 { 00:11:35.600 "cntlid": 29, 00:11:35.600 "qid": 0, 00:11:35.600 "state": "enabled", 00:11:35.600 "thread": "nvmf_tgt_poll_group_000", 00:11:35.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:35.600 "listen_address": { 00:11:35.600 "trtype": "TCP", 00:11:35.600 "adrfam": "IPv4", 00:11:35.600 "traddr": "10.0.0.3", 00:11:35.600 "trsvcid": "4420" 00:11:35.600 }, 00:11:35.600 "peer_address": { 00:11:35.600 "trtype": "TCP", 00:11:35.600 "adrfam": 
"IPv4", 00:11:35.600 "traddr": "10.0.0.1", 00:11:35.600 "trsvcid": "51128" 00:11:35.600 }, 00:11:35.600 "auth": { 00:11:35.600 "state": "completed", 00:11:35.600 "digest": "sha256", 00:11:35.600 "dhgroup": "ffdhe4096" 00:11:35.600 } 00:11:35.600 } 00:11:35.600 ]' 00:11:35.600 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.859 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.859 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.859 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.859 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.859 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.859 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.859 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.117 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:36.117 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.684 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:36.943 18:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.943 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.201 00:11:37.201 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.201 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.201 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.462 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.462 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.462 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.462 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.462 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.462 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.462 { 00:11:37.462 "cntlid": 31, 00:11:37.462 "qid": 0, 00:11:37.462 "state": "enabled", 00:11:37.462 "thread": "nvmf_tgt_poll_group_000", 00:11:37.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:37.462 "listen_address": { 00:11:37.462 "trtype": "TCP", 00:11:37.462 "adrfam": "IPv4", 00:11:37.462 "traddr": "10.0.0.3", 00:11:37.462 "trsvcid": "4420" 00:11:37.462 }, 00:11:37.462 "peer_address": { 00:11:37.462 "trtype": "TCP", 
00:11:37.462 "adrfam": "IPv4", 00:11:37.462 "traddr": "10.0.0.1", 00:11:37.462 "trsvcid": "51148" 00:11:37.462 }, 00:11:37.462 "auth": { 00:11:37.462 "state": "completed", 00:11:37.462 "digest": "sha256", 00:11:37.462 "dhgroup": "ffdhe4096" 00:11:37.462 } 00:11:37.462 } 00:11:37.462 ]' 00:11:37.463 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.735 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.735 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.736 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.736 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.736 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.736 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.736 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.008 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:38.008 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:38.577 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:38.836 
18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.836 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.403 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.404 { 00:11:39.404 "cntlid": 33, 00:11:39.404 "qid": 0, 00:11:39.404 "state": "enabled", 00:11:39.404 "thread": "nvmf_tgt_poll_group_000", 00:11:39.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:39.404 "listen_address": { 00:11:39.404 "trtype": "TCP", 00:11:39.404 "adrfam": "IPv4", 00:11:39.404 "traddr": 
"10.0.0.3", 00:11:39.404 "trsvcid": "4420" 00:11:39.404 }, 00:11:39.404 "peer_address": { 00:11:39.404 "trtype": "TCP", 00:11:39.404 "adrfam": "IPv4", 00:11:39.404 "traddr": "10.0.0.1", 00:11:39.404 "trsvcid": "51180" 00:11:39.404 }, 00:11:39.404 "auth": { 00:11:39.404 "state": "completed", 00:11:39.404 "digest": "sha256", 00:11:39.404 "dhgroup": "ffdhe6144" 00:11:39.404 } 00:11:39.404 } 00:11:39.404 ]' 00:11:39.404 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.663 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.663 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.663 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:39.663 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.663 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.663 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.663 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.922 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:39.922 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.490 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.750 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.318 00:11:41.318 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.318 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.318 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.577 { 00:11:41.577 "cntlid": 35, 00:11:41.577 "qid": 0, 00:11:41.577 "state": "enabled", 00:11:41.577 "thread": "nvmf_tgt_poll_group_000", 
00:11:41.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:41.577 "listen_address": { 00:11:41.577 "trtype": "TCP", 00:11:41.577 "adrfam": "IPv4", 00:11:41.577 "traddr": "10.0.0.3", 00:11:41.577 "trsvcid": "4420" 00:11:41.577 }, 00:11:41.577 "peer_address": { 00:11:41.577 "trtype": "TCP", 00:11:41.577 "adrfam": "IPv4", 00:11:41.577 "traddr": "10.0.0.1", 00:11:41.577 "trsvcid": "51206" 00:11:41.577 }, 00:11:41.577 "auth": { 00:11:41.577 "state": "completed", 00:11:41.577 "digest": "sha256", 00:11:41.577 "dhgroup": "ffdhe6144" 00:11:41.577 } 00:11:41.577 } 00:11:41.577 ]' 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.577 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.837 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.837 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.837 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.097 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:42.097 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:42.665 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.665 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:42.665 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.665 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.665 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.665 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.666 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:42.666 18:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.925 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.493 00:11:43.493 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.493 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.493 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.753 { 
00:11:43.753 "cntlid": 37, 00:11:43.753 "qid": 0, 00:11:43.753 "state": "enabled", 00:11:43.753 "thread": "nvmf_tgt_poll_group_000", 00:11:43.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:43.753 "listen_address": { 00:11:43.753 "trtype": "TCP", 00:11:43.753 "adrfam": "IPv4", 00:11:43.753 "traddr": "10.0.0.3", 00:11:43.753 "trsvcid": "4420" 00:11:43.753 }, 00:11:43.753 "peer_address": { 00:11:43.753 "trtype": "TCP", 00:11:43.753 "adrfam": "IPv4", 00:11:43.753 "traddr": "10.0.0.1", 00:11:43.753 "trsvcid": "51240" 00:11:43.753 }, 00:11:43.753 "auth": { 00:11:43.753 "state": "completed", 00:11:43.753 "digest": "sha256", 00:11:43.753 "dhgroup": "ffdhe6144" 00:11:43.753 } 00:11:43.753 } 00:11:43.753 ]' 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.753 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.012 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:44.012 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.952 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.212 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.212 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:45.212 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.212 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.488 00:11:45.488 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.488 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.488 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:46.059 { 00:11:46.059 "cntlid": 39, 00:11:46.059 "qid": 0, 00:11:46.059 "state": "enabled", 00:11:46.059 "thread": "nvmf_tgt_poll_group_000", 00:11:46.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:46.059 "listen_address": { 00:11:46.059 "trtype": "TCP", 00:11:46.059 "adrfam": "IPv4", 00:11:46.059 "traddr": "10.0.0.3", 00:11:46.059 "trsvcid": "4420" 00:11:46.059 }, 00:11:46.059 "peer_address": { 00:11:46.059 "trtype": "TCP", 00:11:46.059 "adrfam": "IPv4", 00:11:46.059 "traddr": "10.0.0.1", 00:11:46.059 "trsvcid": "33816" 00:11:46.059 }, 00:11:46.059 "auth": { 00:11:46.059 "state": "completed", 00:11:46.059 "digest": "sha256", 00:11:46.059 "dhgroup": "ffdhe6144" 00:11:46.059 } 00:11:46.059 } 00:11:46.059 ]' 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.059 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.319 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:46.319 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:46.888 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.147 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.714 00:11:47.714 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.714 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.714 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.973 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.973 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.973 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.973 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.973 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:47.973 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.973 { 00:11:47.973 "cntlid": 41, 00:11:47.973 "qid": 0, 00:11:47.973 "state": "enabled", 00:11:47.973 "thread": "nvmf_tgt_poll_group_000", 00:11:47.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:47.973 "listen_address": { 00:11:47.973 "trtype": "TCP", 00:11:47.973 "adrfam": "IPv4", 00:11:47.973 "traddr": "10.0.0.3", 00:11:47.973 "trsvcid": "4420" 00:11:47.973 }, 00:11:47.973 "peer_address": { 00:11:47.973 "trtype": "TCP", 00:11:47.973 "adrfam": "IPv4", 00:11:47.973 "traddr": "10.0.0.1", 00:11:47.973 "trsvcid": "33860" 00:11:47.973 }, 00:11:47.973 "auth": { 00:11:47.973 "state": "completed", 00:11:47.973 "digest": "sha256", 00:11:47.973 "dhgroup": "ffdhe8192" 00:11:47.973 } 00:11:47.973 } 00:11:47.973 ]' 00:11:47.973 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.239 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.239 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.239 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:48.239 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.239 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.239 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.239 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.500 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:48.500 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:49.067 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.067 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:49.067 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.067 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.326 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:49.326 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.326 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:49.326 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.584 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:49.585 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.152 00:11:50.152 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.152 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.152 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.411 18:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.411 { 00:11:50.411 "cntlid": 43, 00:11:50.411 "qid": 0, 00:11:50.411 "state": "enabled", 00:11:50.411 "thread": "nvmf_tgt_poll_group_000", 00:11:50.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:50.411 "listen_address": { 00:11:50.411 "trtype": "TCP", 00:11:50.411 "adrfam": "IPv4", 00:11:50.411 "traddr": "10.0.0.3", 00:11:50.411 "trsvcid": "4420" 00:11:50.411 }, 00:11:50.411 "peer_address": { 00:11:50.411 "trtype": "TCP", 00:11:50.411 "adrfam": "IPv4", 00:11:50.411 "traddr": "10.0.0.1", 00:11:50.411 "trsvcid": "33876" 00:11:50.411 }, 00:11:50.411 "auth": { 00:11:50.411 "state": "completed", 00:11:50.411 "digest": "sha256", 00:11:50.411 "dhgroup": "ffdhe8192" 00:11:50.411 } 00:11:50.411 } 00:11:50.411 ]' 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.411 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.412 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.671 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:50.671 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.610 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.876 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.482 00:11:52.482 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.483 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.483 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.483 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.741 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.741 18:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.741 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.741 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.741 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.741 { 00:11:52.742 "cntlid": 45, 00:11:52.742 "qid": 0, 00:11:52.742 "state": "enabled", 00:11:52.742 "thread": "nvmf_tgt_poll_group_000", 00:11:52.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:52.742 "listen_address": { 00:11:52.742 "trtype": "TCP", 00:11:52.742 "adrfam": "IPv4", 00:11:52.742 "traddr": "10.0.0.3", 00:11:52.742 "trsvcid": "4420" 00:11:52.742 }, 00:11:52.742 "peer_address": { 00:11:52.742 "trtype": "TCP", 00:11:52.742 "adrfam": "IPv4", 00:11:52.742 "traddr": "10.0.0.1", 00:11:52.742 "trsvcid": "33916" 00:11:52.742 }, 00:11:52.742 "auth": { 00:11:52.742 "state": "completed", 00:11:52.742 "digest": "sha256", 00:11:52.742 "dhgroup": "ffdhe8192" 00:11:52.742 } 00:11:52.742 } 00:11:52.742 ]' 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.742 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.001 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:53.001 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:11:53.587 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.847 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:53.847 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:53.847 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.847 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.847 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.847 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.847 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.105 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.672 00:11:54.672 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.672 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.672 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.931 
18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.931 { 00:11:54.931 "cntlid": 47, 00:11:54.931 "qid": 0, 00:11:54.931 "state": "enabled", 00:11:54.931 "thread": "nvmf_tgt_poll_group_000", 00:11:54.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:54.931 "listen_address": { 00:11:54.931 "trtype": "TCP", 00:11:54.931 "adrfam": "IPv4", 00:11:54.931 "traddr": "10.0.0.3", 00:11:54.931 "trsvcid": "4420" 00:11:54.931 }, 00:11:54.931 "peer_address": { 00:11:54.931 "trtype": "TCP", 00:11:54.931 "adrfam": "IPv4", 00:11:54.931 "traddr": "10.0.0.1", 00:11:54.931 "trsvcid": "33942" 00:11:54.931 }, 00:11:54.931 "auth": { 00:11:54.931 "state": "completed", 00:11:54.931 "digest": "sha256", 00:11:54.931 "dhgroup": "ffdhe8192" 00:11:54.931 } 00:11:54.931 } 00:11:54.931 ]' 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:54.931 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.189 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.189 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.189 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.189 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:55.189 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.126 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.126 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:56.126 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.126 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:56.126 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:56.126 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:56.126 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.126 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.385 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.385 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.385 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.385 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.385 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.385 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.644 00:11:56.645 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.645 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.645 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.904 { 00:11:56.904 "cntlid": 49, 00:11:56.904 "qid": 0, 00:11:56.904 "state": "enabled", 00:11:56.904 "thread": "nvmf_tgt_poll_group_000", 00:11:56.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:56.904 "listen_address": { 00:11:56.904 "trtype": "TCP", 00:11:56.904 "adrfam": "IPv4", 00:11:56.904 "traddr": "10.0.0.3", 00:11:56.904 "trsvcid": "4420" 00:11:56.904 }, 00:11:56.904 "peer_address": { 00:11:56.904 "trtype": "TCP", 00:11:56.904 "adrfam": "IPv4", 00:11:56.904 "traddr": "10.0.0.1", 00:11:56.904 "trsvcid": "58164" 00:11:56.904 }, 00:11:56.904 "auth": { 00:11:56.904 "state": "completed", 00:11:56.904 "digest": "sha384", 00:11:56.904 "dhgroup": "null" 00:11:56.904 } 00:11:56.904 } 00:11:56.904 ]' 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.904 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.163 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:57.163 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.163 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.163 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.163 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.422 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:57.422 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:11:57.988 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.988 18:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:57.988 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.988 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.988 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.988 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.988 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:57.988 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.247 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.507 00:11:58.507 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.507 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.507 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.766 { 00:11:58.766 "cntlid": 51, 00:11:58.766 "qid": 0, 00:11:58.766 "state": "enabled", 00:11:58.766 "thread": "nvmf_tgt_poll_group_000", 00:11:58.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:11:58.766 "listen_address": { 00:11:58.766 "trtype": "TCP", 00:11:58.766 "adrfam": "IPv4", 00:11:58.766 "traddr": "10.0.0.3", 00:11:58.766 "trsvcid": "4420" 00:11:58.766 }, 00:11:58.766 "peer_address": { 00:11:58.766 "trtype": "TCP", 00:11:58.766 "adrfam": "IPv4", 00:11:58.766 "traddr": "10.0.0.1", 00:11:58.766 "trsvcid": "58200" 00:11:58.766 }, 00:11:58.766 "auth": { 00:11:58.766 "state": "completed", 00:11:58.766 "digest": "sha384", 00:11:58.766 "dhgroup": "null" 00:11:58.766 } 00:11:58.766 } 00:11:58.766 ]' 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.766 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.767 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.026 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:59.026 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.026 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.026 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.026 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.285 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:59.286 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.854 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.854 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.115 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.115 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.115 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.115 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.115 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.374 00:12:00.374 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.374 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:12:00.374 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.943 { 00:12:00.943 "cntlid": 53, 00:12:00.943 "qid": 0, 00:12:00.943 "state": "enabled", 00:12:00.943 "thread": "nvmf_tgt_poll_group_000", 00:12:00.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:00.943 "listen_address": { 00:12:00.943 "trtype": "TCP", 00:12:00.943 "adrfam": "IPv4", 00:12:00.943 "traddr": "10.0.0.3", 00:12:00.943 "trsvcid": "4420" 00:12:00.943 }, 00:12:00.943 "peer_address": { 00:12:00.943 "trtype": "TCP", 00:12:00.943 "adrfam": "IPv4", 00:12:00.943 "traddr": "10.0.0.1", 00:12:00.943 "trsvcid": "58224" 00:12:00.943 }, 00:12:00.943 "auth": { 00:12:00.943 "state": "completed", 00:12:00.943 "digest": "sha384", 00:12:00.943 "dhgroup": "null" 00:12:00.943 } 00:12:00.943 } 00:12:00.943 ]' 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.943 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.202 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:01.202 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:01.770 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.339 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.597 00:12:02.598 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.598 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
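The check that follows each attach is the same every time: confirm the host created a controller named nvme0, then pull the subsystem's qpair list from the target and assert that the negotiated digest, DH group, and authentication state match what was configured. A condensed sketch of that verification leg, again using only commands that appear in the trace; the expected sha384/null values are the ones for this pass, and rpc.py stands for the full scripts/rpc.py path as above.

    # Host: the attached controller should be reported as nvme0.
    name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target: the qpair's auth block should reflect the negotiated parameters.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down the host-side controller before the kernel-initiator leg.
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0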
00:12:02.598 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.857 { 00:12:02.857 "cntlid": 55, 00:12:02.857 "qid": 0, 00:12:02.857 "state": "enabled", 00:12:02.857 "thread": "nvmf_tgt_poll_group_000", 00:12:02.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:02.857 "listen_address": { 00:12:02.857 "trtype": "TCP", 00:12:02.857 "adrfam": "IPv4", 00:12:02.857 "traddr": "10.0.0.3", 00:12:02.857 "trsvcid": "4420" 00:12:02.857 }, 00:12:02.857 "peer_address": { 00:12:02.857 "trtype": "TCP", 00:12:02.857 "adrfam": "IPv4", 00:12:02.857 "traddr": "10.0.0.1", 00:12:02.857 "trsvcid": "58256" 00:12:02.857 }, 00:12:02.857 "auth": { 00:12:02.857 "state": "completed", 00:12:02.857 "digest": "sha384", 00:12:02.857 "dhgroup": "null" 00:12:02.857 } 00:12:02.857 } 00:12:02.857 ]' 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:02.857 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.116 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.116 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.117 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.117 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:03.117 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:03.683 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
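Each pass then ends with an in-band check from the kernel initiator: connect with nvme-cli, passing the host's DH-HMAC-CHAP secret (and the controller secret when bidirectional authentication is being exercised for that key index), then disconnect and drop the host entry so the next digest/dhgroup combination starts clean. A hedged sketch of that leg follows; the DHHC-1 strings are placeholders standing in for the secrets printed in the trace, not values introduced here.

    # Kernel initiator: authenticate in-band with the per-key secret(s).
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c \
        --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 \
        --dhchap-secret "DHHC-1:03:<host secret from the trace>" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller secret, only when present>"

    # Disconnect and remove the host so the next iteration reconfigures from scratch.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c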
00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:03.942 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.200 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.459 00:12:04.459 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.459 
18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.459 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.718 { 00:12:04.718 "cntlid": 57, 00:12:04.718 "qid": 0, 00:12:04.718 "state": "enabled", 00:12:04.718 "thread": "nvmf_tgt_poll_group_000", 00:12:04.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:04.718 "listen_address": { 00:12:04.718 "trtype": "TCP", 00:12:04.718 "adrfam": "IPv4", 00:12:04.718 "traddr": "10.0.0.3", 00:12:04.718 "trsvcid": "4420" 00:12:04.718 }, 00:12:04.718 "peer_address": { 00:12:04.718 "trtype": "TCP", 00:12:04.718 "adrfam": "IPv4", 00:12:04.718 "traddr": "10.0.0.1", 00:12:04.718 "trsvcid": "58278" 00:12:04.718 }, 00:12:04.718 "auth": { 00:12:04.718 "state": "completed", 00:12:04.718 "digest": "sha384", 00:12:04.718 "dhgroup": "ffdhe2048" 00:12:04.718 } 00:12:04.718 } 00:12:04.718 ]' 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.718 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.977 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:04.977 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: 
--dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.914 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.482 00:12:06.482 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.482 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.482 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.754 { 00:12:06.754 "cntlid": 59, 00:12:06.754 "qid": 0, 00:12:06.754 "state": "enabled", 00:12:06.754 "thread": "nvmf_tgt_poll_group_000", 00:12:06.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:06.754 "listen_address": { 00:12:06.754 "trtype": "TCP", 00:12:06.754 "adrfam": "IPv4", 00:12:06.754 "traddr": "10.0.0.3", 00:12:06.754 "trsvcid": "4420" 00:12:06.754 }, 00:12:06.754 "peer_address": { 00:12:06.754 "trtype": "TCP", 00:12:06.754 "adrfam": "IPv4", 00:12:06.754 "traddr": "10.0.0.1", 00:12:06.754 "trsvcid": "35572" 00:12:06.754 }, 00:12:06.754 "auth": { 00:12:06.754 "state": "completed", 00:12:06.754 "digest": "sha384", 00:12:06.754 "dhgroup": "ffdhe2048" 00:12:06.754 } 00:12:06.754 } 00:12:06.754 ]' 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.754 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.038 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:07.038 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:07.619 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.620 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:07.620 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.620 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.620 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.620 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.620 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:07.620 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.878 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.137 00:12:08.137 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.137 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.137 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.396 { 00:12:08.396 "cntlid": 61, 00:12:08.396 "qid": 0, 00:12:08.396 "state": "enabled", 00:12:08.396 "thread": "nvmf_tgt_poll_group_000", 00:12:08.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:08.396 "listen_address": { 00:12:08.396 "trtype": "TCP", 00:12:08.396 "adrfam": "IPv4", 00:12:08.396 "traddr": "10.0.0.3", 00:12:08.396 "trsvcid": "4420" 00:12:08.396 }, 00:12:08.396 "peer_address": { 00:12:08.396 "trtype": "TCP", 00:12:08.396 "adrfam": "IPv4", 00:12:08.396 "traddr": "10.0.0.1", 00:12:08.396 "trsvcid": "35608" 00:12:08.396 }, 00:12:08.396 "auth": { 00:12:08.396 "state": "completed", 00:12:08.396 "digest": "sha384", 00:12:08.396 "dhgroup": "ffdhe2048" 00:12:08.396 } 00:12:08.396 } 00:12:08.396 ]' 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.396 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.655 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:08.655 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.655 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.655 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.655 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.914 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:08.914 18:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.482 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:09.741 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.742 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.000 00:12:10.000 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.000 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.000 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.259 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.259 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.259 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.259 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.259 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.259 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.259 { 00:12:10.259 "cntlid": 63, 00:12:10.259 "qid": 0, 00:12:10.259 "state": "enabled", 00:12:10.259 "thread": "nvmf_tgt_poll_group_000", 00:12:10.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:10.259 "listen_address": { 00:12:10.259 "trtype": "TCP", 00:12:10.259 "adrfam": "IPv4", 00:12:10.259 "traddr": "10.0.0.3", 00:12:10.259 "trsvcid": "4420" 00:12:10.259 }, 00:12:10.259 "peer_address": { 00:12:10.259 "trtype": "TCP", 00:12:10.259 "adrfam": "IPv4", 00:12:10.259 "traddr": "10.0.0.1", 00:12:10.259 "trsvcid": "35638" 00:12:10.259 }, 00:12:10.259 "auth": { 00:12:10.259 "state": "completed", 00:12:10.259 "digest": "sha384", 00:12:10.259 "dhgroup": "ffdhe2048" 00:12:10.259 } 00:12:10.259 } 00:12:10.259 ]' 00:12:10.259 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.518 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.518 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.518 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.518 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.518 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.518 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.518 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.776 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:10.776 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:11.343 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:11.654 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.912 00:12:11.912 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.912 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.912 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.170 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.170 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.170 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.170 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.170 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.170 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.170 { 00:12:12.170 "cntlid": 65, 00:12:12.170 "qid": 0, 00:12:12.170 "state": "enabled", 00:12:12.170 "thread": "nvmf_tgt_poll_group_000", 00:12:12.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:12.170 "listen_address": { 00:12:12.170 "trtype": "TCP", 00:12:12.170 "adrfam": "IPv4", 00:12:12.170 "traddr": "10.0.0.3", 00:12:12.170 "trsvcid": "4420" 00:12:12.170 }, 00:12:12.170 "peer_address": { 00:12:12.170 "trtype": "TCP", 00:12:12.170 "adrfam": "IPv4", 00:12:12.170 "traddr": "10.0.0.1", 00:12:12.170 "trsvcid": "35674" 00:12:12.170 }, 00:12:12.170 "auth": { 00:12:12.170 "state": "completed", 00:12:12.170 "digest": "sha384", 00:12:12.170 "dhgroup": "ffdhe3072" 00:12:12.170 } 00:12:12.170 } 00:12:12.170 ]' 00:12:12.170 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.427 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.427 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.427 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.427 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.427 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.427 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.427 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.686 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:12.686 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.253 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.821 18:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.821 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.081 00:12:14.081 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.081 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.081 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.340 { 00:12:14.340 "cntlid": 67, 00:12:14.340 "qid": 0, 00:12:14.340 "state": "enabled", 00:12:14.340 "thread": "nvmf_tgt_poll_group_000", 00:12:14.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:14.340 "listen_address": { 00:12:14.340 "trtype": "TCP", 00:12:14.340 "adrfam": "IPv4", 00:12:14.340 "traddr": "10.0.0.3", 00:12:14.340 "trsvcid": "4420" 00:12:14.340 }, 00:12:14.340 "peer_address": { 00:12:14.340 "trtype": "TCP", 00:12:14.340 "adrfam": "IPv4", 00:12:14.340 "traddr": "10.0.0.1", 00:12:14.340 "trsvcid": "35704" 00:12:14.340 }, 00:12:14.340 "auth": { 00:12:14.340 "state": "completed", 00:12:14.340 "digest": "sha384", 00:12:14.340 "dhgroup": "ffdhe3072" 00:12:14.340 } 00:12:14.340 } 00:12:14.340 ]' 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.340 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.600 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:14.600 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:15.169 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.738 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.997 00:12:15.997 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.997 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.997 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.257 { 00:12:16.257 "cntlid": 69, 00:12:16.257 "qid": 0, 00:12:16.257 "state": "enabled", 00:12:16.257 "thread": "nvmf_tgt_poll_group_000", 00:12:16.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:16.257 "listen_address": { 00:12:16.257 "trtype": "TCP", 00:12:16.257 "adrfam": "IPv4", 00:12:16.257 "traddr": "10.0.0.3", 00:12:16.257 "trsvcid": "4420" 00:12:16.257 }, 00:12:16.257 "peer_address": { 00:12:16.257 "trtype": "TCP", 00:12:16.257 "adrfam": "IPv4", 00:12:16.257 "traddr": "10.0.0.1", 00:12:16.257 "trsvcid": "45932" 00:12:16.257 }, 00:12:16.257 "auth": { 00:12:16.257 "state": "completed", 00:12:16.257 "digest": "sha384", 00:12:16.257 "dhgroup": "ffdhe3072" 00:12:16.257 } 00:12:16.257 } 00:12:16.257 ]' 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:16.257 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.826 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:16.826 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:17.086 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.086 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:17.086 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.086 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.086 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.086 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.086 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.086 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.659 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.918 00:12:17.918 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.918 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.918 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.177 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.177 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.177 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.177 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.177 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.177 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.177 { 00:12:18.177 "cntlid": 71, 00:12:18.177 "qid": 0, 00:12:18.177 "state": "enabled", 00:12:18.177 "thread": "nvmf_tgt_poll_group_000", 00:12:18.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:18.177 "listen_address": { 00:12:18.177 "trtype": "TCP", 00:12:18.177 "adrfam": "IPv4", 00:12:18.178 "traddr": "10.0.0.3", 00:12:18.178 "trsvcid": "4420" 00:12:18.178 }, 00:12:18.178 "peer_address": { 00:12:18.178 "trtype": "TCP", 00:12:18.178 "adrfam": "IPv4", 00:12:18.178 "traddr": "10.0.0.1", 00:12:18.178 "trsvcid": "45938" 00:12:18.178 }, 00:12:18.178 "auth": { 00:12:18.178 "state": "completed", 00:12:18.178 "digest": "sha384", 00:12:18.178 "dhgroup": "ffdhe3072" 00:12:18.178 } 00:12:18.178 } 00:12:18.178 ]' 00:12:18.178 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.178 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.178 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.437 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.437 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.437 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.437 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.437 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.696 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:18.696 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:19.264 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.264 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:19.264 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.264 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.264 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.264 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.264 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.264 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:19.264 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.524 18:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.524 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.783 00:12:19.783 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.783 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.783 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.042 { 00:12:20.042 "cntlid": 73, 00:12:20.042 "qid": 0, 00:12:20.042 "state": "enabled", 00:12:20.042 "thread": "nvmf_tgt_poll_group_000", 00:12:20.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:20.042 "listen_address": { 00:12:20.042 "trtype": "TCP", 00:12:20.042 "adrfam": "IPv4", 00:12:20.042 "traddr": "10.0.0.3", 00:12:20.042 "trsvcid": "4420" 00:12:20.042 }, 00:12:20.042 "peer_address": { 00:12:20.042 "trtype": "TCP", 00:12:20.042 "adrfam": "IPv4", 00:12:20.042 "traddr": "10.0.0.1", 00:12:20.042 "trsvcid": "45970" 00:12:20.042 }, 00:12:20.042 "auth": { 00:12:20.042 "state": "completed", 00:12:20.042 "digest": "sha384", 00:12:20.042 "dhgroup": "ffdhe4096" 00:12:20.042 } 00:12:20.042 } 00:12:20.042 ]' 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.042 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.302 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.302 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.302 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.302 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.302 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.564 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:20.564 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.205 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.465 18:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.465 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.033 00:12:22.033 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.033 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.033 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.292 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.292 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.292 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.292 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.292 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.292 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.292 { 00:12:22.292 "cntlid": 75, 00:12:22.292 "qid": 0, 00:12:22.292 "state": "enabled", 00:12:22.292 "thread": "nvmf_tgt_poll_group_000", 00:12:22.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:22.292 "listen_address": { 00:12:22.292 "trtype": "TCP", 00:12:22.292 "adrfam": "IPv4", 00:12:22.292 "traddr": "10.0.0.3", 00:12:22.292 "trsvcid": "4420" 00:12:22.292 }, 00:12:22.292 "peer_address": { 00:12:22.292 "trtype": "TCP", 00:12:22.292 "adrfam": "IPv4", 00:12:22.292 "traddr": "10.0.0.1", 00:12:22.292 "trsvcid": "45992" 00:12:22.292 }, 00:12:22.292 "auth": { 00:12:22.292 "state": "completed", 00:12:22.292 "digest": "sha384", 00:12:22.292 "dhgroup": "ffdhe4096" 00:12:22.292 } 00:12:22.292 } 00:12:22.292 ]' 00:12:22.292 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.293 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.293 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.293 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:22.293 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.552 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.552 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.552 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.810 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:22.810 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:23.377 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.635 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.893 00:12:24.151 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.151 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.151 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.410 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.410 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.410 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.410 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.410 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.410 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.410 { 00:12:24.410 "cntlid": 77, 00:12:24.410 "qid": 0, 00:12:24.410 "state": "enabled", 00:12:24.410 "thread": "nvmf_tgt_poll_group_000", 00:12:24.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:24.410 "listen_address": { 00:12:24.410 "trtype": "TCP", 00:12:24.410 "adrfam": "IPv4", 00:12:24.410 "traddr": "10.0.0.3", 00:12:24.410 "trsvcid": "4420" 00:12:24.411 }, 00:12:24.411 "peer_address": { 00:12:24.411 "trtype": "TCP", 00:12:24.411 "adrfam": "IPv4", 00:12:24.411 "traddr": "10.0.0.1", 00:12:24.411 "trsvcid": "46020" 00:12:24.411 }, 00:12:24.411 "auth": { 00:12:24.411 "state": "completed", 00:12:24.411 "digest": "sha384", 00:12:24.411 "dhgroup": "ffdhe4096" 00:12:24.411 } 00:12:24.411 } 00:12:24.411 ]' 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.411 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.670 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:24.670 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.610 18:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.610 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.180 00:12:26.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.180 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.440 { 00:12:26.440 "cntlid": 79, 00:12:26.440 "qid": 0, 00:12:26.440 "state": "enabled", 00:12:26.440 "thread": "nvmf_tgt_poll_group_000", 00:12:26.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:26.440 "listen_address": { 00:12:26.440 "trtype": "TCP", 00:12:26.440 "adrfam": "IPv4", 00:12:26.440 "traddr": "10.0.0.3", 00:12:26.440 "trsvcid": "4420" 00:12:26.440 }, 00:12:26.440 "peer_address": { 00:12:26.440 "trtype": "TCP", 00:12:26.440 "adrfam": "IPv4", 00:12:26.440 "traddr": "10.0.0.1", 00:12:26.440 "trsvcid": "46916" 00:12:26.440 }, 00:12:26.440 "auth": { 00:12:26.440 "state": "completed", 00:12:26.440 "digest": "sha384", 00:12:26.440 "dhgroup": "ffdhe4096" 00:12:26.440 } 00:12:26.440 } 00:12:26.440 ]' 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.440 18:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.440 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.701 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:26.702 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.279 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:27.537 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.538 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.104 00:12:28.104 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.104 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.104 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.362 { 00:12:28.362 "cntlid": 81, 00:12:28.362 "qid": 0, 00:12:28.362 "state": "enabled", 00:12:28.362 "thread": "nvmf_tgt_poll_group_000", 00:12:28.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:28.362 "listen_address": { 00:12:28.362 "trtype": "TCP", 00:12:28.362 "adrfam": "IPv4", 00:12:28.362 "traddr": "10.0.0.3", 00:12:28.362 "trsvcid": "4420" 00:12:28.362 }, 00:12:28.362 "peer_address": { 00:12:28.362 "trtype": "TCP", 00:12:28.362 "adrfam": "IPv4", 00:12:28.362 "traddr": "10.0.0.1", 00:12:28.362 "trsvcid": "46950" 00:12:28.362 }, 00:12:28.362 "auth": { 00:12:28.362 "state": "completed", 00:12:28.362 "digest": "sha384", 00:12:28.362 "dhgroup": "ffdhe6144" 00:12:28.362 } 00:12:28.362 } 00:12:28.362 ]' 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
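The trace above keeps repeating one test cycle per (digest, dhgroup, key index) combination. As a reading aid, here is a condensed bash sketch of that cycle, reconstructed only from the commands visible in the trace; it is not part of the captured output. The hostrpc/rpc_cmd helpers, the keys/ckeys arrays, the subsystem and host NQNs, and the 10.0.0.3:4420 listener all appear in the trace itself, but the secret values below are placeholders and the exact step ordering inside target/auth.sh may differ.

# Condensed sketch of the connect_authenticate cycle exercised above (assumptions noted inline).
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side RPC socket, path as in the trace
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                        # target-side RPC, default socket assumed

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c
hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c
keys=("DHHC-1:00:placeholder-key0" "DHHC-1:01:placeholder-key1")   # generated DHHC-1 secrets in the real script
ckeys=("DHHC-1:03:placeholder-ckey0" "")                           # empty entry => no controller key for that index

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
  for keyid in "${!keys[@]}"; do
    # limit the host bdev layer to the digest/dhgroup under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"

    # register the host on the subsystem with key$keyid (and ckey$keyid when one exists)
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # SPDK-initiator path: attach, verify the negotiated auth parameters, detach
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'   # expect: completed sha384 <dhgroup>
    hostrpc bdev_nvme_detach_controller nvme0

    # kernel-initiator path: authenticate with nvme-cli using the raw secrets, then disconnect
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "${keys[$keyid]}" \
        ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
    nvme disconnect -n "$subnqn"

    # tear down so the next combination starts from a clean state
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done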
00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.362 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.620 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:28.620 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.555 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.122 00:12:30.122 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.122 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.122 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.381 { 00:12:30.381 "cntlid": 83, 00:12:30.381 "qid": 0, 00:12:30.381 "state": "enabled", 00:12:30.381 "thread": "nvmf_tgt_poll_group_000", 00:12:30.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:30.381 "listen_address": { 00:12:30.381 "trtype": "TCP", 00:12:30.381 "adrfam": "IPv4", 00:12:30.381 "traddr": "10.0.0.3", 00:12:30.381 "trsvcid": "4420" 00:12:30.381 }, 00:12:30.381 "peer_address": { 00:12:30.381 "trtype": "TCP", 00:12:30.381 "adrfam": "IPv4", 00:12:30.381 "traddr": "10.0.0.1", 00:12:30.381 "trsvcid": "46988" 00:12:30.381 }, 00:12:30.381 "auth": { 00:12:30.381 "state": "completed", 00:12:30.381 "digest": "sha384", 
00:12:30.381 "dhgroup": "ffdhe6144" 00:12:30.381 } 00:12:30.381 } 00:12:30.381 ]' 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:30.381 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.639 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.639 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.639 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.897 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:30.897 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:31.464 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.724 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.293 00:12:32.293 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.293 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.293 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.552 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.552 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.552 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.552 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.552 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.552 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.552 { 00:12:32.552 "cntlid": 85, 00:12:32.552 "qid": 0, 00:12:32.552 "state": "enabled", 00:12:32.552 "thread": "nvmf_tgt_poll_group_000", 00:12:32.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:32.552 "listen_address": { 00:12:32.552 "trtype": "TCP", 00:12:32.552 "adrfam": "IPv4", 00:12:32.552 "traddr": "10.0.0.3", 00:12:32.552 "trsvcid": "4420" 00:12:32.552 }, 00:12:32.552 "peer_address": { 00:12:32.552 "trtype": "TCP", 00:12:32.552 "adrfam": "IPv4", 00:12:32.553 "traddr": "10.0.0.1", 00:12:32.553 "trsvcid": "47012" 
00:12:32.553 }, 00:12:32.553 "auth": { 00:12:32.553 "state": "completed", 00:12:32.553 "digest": "sha384", 00:12:32.553 "dhgroup": "ffdhe6144" 00:12:32.553 } 00:12:32.553 } 00:12:32.553 ]' 00:12:32.553 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.812 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.812 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.812 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:32.812 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.812 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.812 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.812 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.072 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:33.072 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:33.644 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.955 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.523 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.523 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.523 { 00:12:34.523 "cntlid": 87, 00:12:34.523 "qid": 0, 00:12:34.523 "state": "enabled", 00:12:34.523 "thread": "nvmf_tgt_poll_group_000", 00:12:34.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:34.523 "listen_address": { 00:12:34.523 "trtype": "TCP", 00:12:34.523 "adrfam": "IPv4", 00:12:34.523 "traddr": "10.0.0.3", 00:12:34.523 "trsvcid": "4420" 00:12:34.523 }, 00:12:34.523 "peer_address": { 00:12:34.523 "trtype": "TCP", 00:12:34.523 "adrfam": "IPv4", 00:12:34.523 "traddr": "10.0.0.1", 00:12:34.523 "trsvcid": 
"47054" 00:12:34.523 }, 00:12:34.524 "auth": { 00:12:34.524 "state": "completed", 00:12:34.524 "digest": "sha384", 00:12:34.524 "dhgroup": "ffdhe6144" 00:12:34.524 } 00:12:34.524 } 00:12:34.524 ]' 00:12:34.524 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.782 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.782 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.782 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:34.782 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.782 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.782 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.782 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.041 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:35.041 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:35.608 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:36.176 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:36.176 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:36.176 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.177 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.745 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.745 { 00:12:36.745 "cntlid": 89, 00:12:36.745 "qid": 0, 00:12:36.745 "state": "enabled", 00:12:36.745 "thread": "nvmf_tgt_poll_group_000", 00:12:36.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:36.745 "listen_address": { 00:12:36.745 "trtype": "TCP", 00:12:36.745 "adrfam": "IPv4", 00:12:36.745 "traddr": "10.0.0.3", 00:12:36.745 "trsvcid": "4420" 00:12:36.745 }, 00:12:36.745 "peer_address": { 00:12:36.745 
"trtype": "TCP", 00:12:36.745 "adrfam": "IPv4", 00:12:36.745 "traddr": "10.0.0.1", 00:12:36.745 "trsvcid": "33602" 00:12:36.745 }, 00:12:36.745 "auth": { 00:12:36.745 "state": "completed", 00:12:36.745 "digest": "sha384", 00:12:36.745 "dhgroup": "ffdhe8192" 00:12:36.745 } 00:12:36.745 } 00:12:36.745 ]' 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.745 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.005 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:37.005 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.005 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.005 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.005 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.264 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:37.264 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:37.832 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:38.092 18:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.092 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.661 00:12:38.661 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.661 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.661 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.920 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.920 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.920 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.920 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.920 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.920 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.920 { 00:12:38.920 "cntlid": 91, 00:12:38.920 "qid": 0, 00:12:38.920 "state": "enabled", 00:12:38.920 "thread": "nvmf_tgt_poll_group_000", 00:12:38.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 
00:12:38.920 "listen_address": { 00:12:38.920 "trtype": "TCP", 00:12:38.920 "adrfam": "IPv4", 00:12:38.920 "traddr": "10.0.0.3", 00:12:38.920 "trsvcid": "4420" 00:12:38.920 }, 00:12:38.920 "peer_address": { 00:12:38.920 "trtype": "TCP", 00:12:38.920 "adrfam": "IPv4", 00:12:38.920 "traddr": "10.0.0.1", 00:12:38.920 "trsvcid": "33640" 00:12:38.920 }, 00:12:38.920 "auth": { 00:12:38.920 "state": "completed", 00:12:38.920 "digest": "sha384", 00:12:38.920 "dhgroup": "ffdhe8192" 00:12:38.920 } 00:12:38.920 } 00:12:38.920 ]' 00:12:38.920 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.179 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.179 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.179 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:39.179 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.179 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.179 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.179 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.438 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:39.438 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:40.006 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.265 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.201 00:12:41.201 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.201 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.201 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.201 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.201 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.201 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.201 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.202 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.202 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.202 { 00:12:41.202 "cntlid": 93, 00:12:41.202 "qid": 0, 00:12:41.202 "state": "enabled", 00:12:41.202 "thread": 
"nvmf_tgt_poll_group_000", 00:12:41.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:41.202 "listen_address": { 00:12:41.202 "trtype": "TCP", 00:12:41.202 "adrfam": "IPv4", 00:12:41.202 "traddr": "10.0.0.3", 00:12:41.202 "trsvcid": "4420" 00:12:41.202 }, 00:12:41.202 "peer_address": { 00:12:41.202 "trtype": "TCP", 00:12:41.202 "adrfam": "IPv4", 00:12:41.202 "traddr": "10.0.0.1", 00:12:41.202 "trsvcid": "33680" 00:12:41.202 }, 00:12:41.202 "auth": { 00:12:41.202 "state": "completed", 00:12:41.202 "digest": "sha384", 00:12:41.202 "dhgroup": "ffdhe8192" 00:12:41.202 } 00:12:41.202 } 00:12:41.202 ]' 00:12:41.202 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.202 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.202 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.202 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:41.202 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.460 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.460 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.460 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.719 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:41.719 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:42.285 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.285 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:42.285 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.285 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.285 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.285 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.285 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:42.285 18:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.543 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.479 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.479 { 00:12:43.479 "cntlid": 95, 00:12:43.479 "qid": 0, 00:12:43.479 "state": "enabled", 00:12:43.479 
"thread": "nvmf_tgt_poll_group_000", 00:12:43.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:43.479 "listen_address": { 00:12:43.479 "trtype": "TCP", 00:12:43.479 "adrfam": "IPv4", 00:12:43.479 "traddr": "10.0.0.3", 00:12:43.479 "trsvcid": "4420" 00:12:43.479 }, 00:12:43.479 "peer_address": { 00:12:43.479 "trtype": "TCP", 00:12:43.479 "adrfam": "IPv4", 00:12:43.479 "traddr": "10.0.0.1", 00:12:43.479 "trsvcid": "33704" 00:12:43.479 }, 00:12:43.479 "auth": { 00:12:43.479 "state": "completed", 00:12:43.479 "digest": "sha384", 00:12:43.479 "dhgroup": "ffdhe8192" 00:12:43.479 } 00:12:43.479 } 00:12:43.479 ]' 00:12:43.479 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.738 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.738 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.739 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:43.739 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.739 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.739 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.739 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.997 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:43.997 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.563 18:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:44.563 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.821 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.822 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.389 00:12:45.389 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.389 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.389 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.647 { 00:12:45.647 "cntlid": 97, 00:12:45.647 "qid": 0, 00:12:45.647 "state": "enabled", 00:12:45.647 "thread": "nvmf_tgt_poll_group_000", 00:12:45.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:45.647 "listen_address": { 00:12:45.647 "trtype": "TCP", 00:12:45.647 "adrfam": "IPv4", 00:12:45.647 "traddr": "10.0.0.3", 00:12:45.647 "trsvcid": "4420" 00:12:45.647 }, 00:12:45.647 "peer_address": { 00:12:45.647 "trtype": "TCP", 00:12:45.647 "adrfam": "IPv4", 00:12:45.647 "traddr": "10.0.0.1", 00:12:45.647 "trsvcid": "33154" 00:12:45.647 }, 00:12:45.647 "auth": { 00:12:45.647 "state": "completed", 00:12:45.647 "digest": "sha512", 00:12:45.647 "dhgroup": "null" 00:12:45.647 } 00:12:45.647 } 00:12:45.647 ]' 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.647 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.905 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:45.905 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.843 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.153 00:12:47.153 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.153 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.153 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.426 18:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.426 { 00:12:47.426 "cntlid": 99, 00:12:47.426 "qid": 0, 00:12:47.426 "state": "enabled", 00:12:47.426 "thread": "nvmf_tgt_poll_group_000", 00:12:47.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:47.426 "listen_address": { 00:12:47.426 "trtype": "TCP", 00:12:47.426 "adrfam": "IPv4", 00:12:47.426 "traddr": "10.0.0.3", 00:12:47.426 "trsvcid": "4420" 00:12:47.426 }, 00:12:47.426 "peer_address": { 00:12:47.426 "trtype": "TCP", 00:12:47.426 "adrfam": "IPv4", 00:12:47.426 "traddr": "10.0.0.1", 00:12:47.426 "trsvcid": "33188" 00:12:47.426 }, 00:12:47.426 "auth": { 00:12:47.426 "state": "completed", 00:12:47.426 "digest": "sha512", 00:12:47.426 "dhgroup": "null" 00:12:47.426 } 00:12:47.426 } 00:12:47.426 ]' 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.426 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.683 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:47.683 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.683 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.683 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.683 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.942 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:47.942 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:48.509 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.510 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:48.510 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.510 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.510 18:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.510 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.510 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.510 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.769 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.029 00:12:49.029 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.029 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.029 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.289 { 00:12:49.289 "cntlid": 101, 00:12:49.289 "qid": 0, 00:12:49.289 "state": "enabled", 00:12:49.289 "thread": "nvmf_tgt_poll_group_000", 00:12:49.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:49.289 "listen_address": { 00:12:49.289 "trtype": "TCP", 00:12:49.289 "adrfam": "IPv4", 00:12:49.289 "traddr": "10.0.0.3", 00:12:49.289 "trsvcid": "4420" 00:12:49.289 }, 00:12:49.289 "peer_address": { 00:12:49.289 "trtype": "TCP", 00:12:49.289 "adrfam": "IPv4", 00:12:49.289 "traddr": "10.0.0.1", 00:12:49.289 "trsvcid": "33230" 00:12:49.289 }, 00:12:49.289 "auth": { 00:12:49.289 "state": "completed", 00:12:49.289 "digest": "sha512", 00:12:49.289 "dhgroup": "null" 00:12:49.289 } 00:12:49.289 } 00:12:49.289 ]' 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.289 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.549 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:49.549 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.549 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.549 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.549 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.808 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:49.808 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:50.377 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.636 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.896 00:12:50.896 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.896 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.896 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.155 { 00:12:51.155 "cntlid": 103, 00:12:51.155 "qid": 0, 00:12:51.155 "state": "enabled", 00:12:51.155 "thread": "nvmf_tgt_poll_group_000", 00:12:51.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:51.155 "listen_address": { 00:12:51.155 "trtype": "TCP", 00:12:51.155 "adrfam": "IPv4", 00:12:51.155 "traddr": "10.0.0.3", 00:12:51.155 "trsvcid": "4420" 00:12:51.155 }, 00:12:51.155 "peer_address": { 00:12:51.155 "trtype": "TCP", 00:12:51.155 "adrfam": "IPv4", 00:12:51.155 "traddr": "10.0.0.1", 00:12:51.155 "trsvcid": "33252" 00:12:51.155 }, 00:12:51.155 "auth": { 00:12:51.155 "state": "completed", 00:12:51.155 "digest": "sha512", 00:12:51.155 "dhgroup": "null" 00:12:51.155 } 00:12:51.155 } 00:12:51.155 ]' 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.155 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.415 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.415 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:51.415 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.415 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.415 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.415 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.674 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:51.674 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.244 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.503 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.763 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.763 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.763 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.764 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.023 00:12:53.023 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.023 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.023 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.282 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.282 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.282 
18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.283 { 00:12:53.283 "cntlid": 105, 00:12:53.283 "qid": 0, 00:12:53.283 "state": "enabled", 00:12:53.283 "thread": "nvmf_tgt_poll_group_000", 00:12:53.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:53.283 "listen_address": { 00:12:53.283 "trtype": "TCP", 00:12:53.283 "adrfam": "IPv4", 00:12:53.283 "traddr": "10.0.0.3", 00:12:53.283 "trsvcid": "4420" 00:12:53.283 }, 00:12:53.283 "peer_address": { 00:12:53.283 "trtype": "TCP", 00:12:53.283 "adrfam": "IPv4", 00:12:53.283 "traddr": "10.0.0.1", 00:12:53.283 "trsvcid": "33290" 00:12:53.283 }, 00:12:53.283 "auth": { 00:12:53.283 "state": "completed", 00:12:53.283 "digest": "sha512", 00:12:53.283 "dhgroup": "ffdhe2048" 00:12:53.283 } 00:12:53.283 } 00:12:53.283 ]' 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:53.283 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.542 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.542 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.542 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.542 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:53.542 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:12:54.481 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:54.482 18:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.482 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.051 00:12:55.051 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.051 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.051 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.311 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:55.311 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.311 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.311 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.311 { 00:12:55.311 "cntlid": 107, 00:12:55.311 "qid": 0, 00:12:55.311 "state": "enabled", 00:12:55.311 "thread": "nvmf_tgt_poll_group_000", 00:12:55.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:55.311 "listen_address": { 00:12:55.311 "trtype": "TCP", 00:12:55.311 "adrfam": "IPv4", 00:12:55.311 "traddr": "10.0.0.3", 00:12:55.311 "trsvcid": "4420" 00:12:55.311 }, 00:12:55.311 "peer_address": { 00:12:55.311 "trtype": "TCP", 00:12:55.311 "adrfam": "IPv4", 00:12:55.311 "traddr": "10.0.0.1", 00:12:55.311 "trsvcid": "33316" 00:12:55.311 }, 00:12:55.311 "auth": { 00:12:55.311 "state": "completed", 00:12:55.311 "digest": "sha512", 00:12:55.311 "dhgroup": "ffdhe2048" 00:12:55.311 } 00:12:55.311 } 00:12:55.311 ]' 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.311 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.570 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:55.570 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.140 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.400 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.969 00:12:56.969 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.969 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.969 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:57.228 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.228 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.228 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.228 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.228 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.228 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.228 { 00:12:57.228 "cntlid": 109, 00:12:57.228 "qid": 0, 00:12:57.228 "state": "enabled", 00:12:57.228 "thread": "nvmf_tgt_poll_group_000", 00:12:57.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:57.228 "listen_address": { 00:12:57.228 "trtype": "TCP", 00:12:57.228 "adrfam": "IPv4", 00:12:57.228 "traddr": "10.0.0.3", 00:12:57.228 "trsvcid": "4420" 00:12:57.228 }, 00:12:57.228 "peer_address": { 00:12:57.228 "trtype": "TCP", 00:12:57.228 "adrfam": "IPv4", 00:12:57.229 "traddr": "10.0.0.1", 00:12:57.229 "trsvcid": "37852" 00:12:57.229 }, 00:12:57.229 "auth": { 00:12:57.229 "state": "completed", 00:12:57.229 "digest": "sha512", 00:12:57.229 "dhgroup": "ffdhe2048" 00:12:57.229 } 00:12:57.229 } 00:12:57.229 ]' 00:12:57.229 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.229 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.229 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.229 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:57.229 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.229 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.229 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.229 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.488 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:57.488 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:12:58.055 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.055 18:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:58.055 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.055 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.055 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.055 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.055 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:58.055 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.314 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.572 00:12:58.572 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.572 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.572 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.831 { 00:12:58.831 "cntlid": 111, 00:12:58.831 "qid": 0, 00:12:58.831 "state": "enabled", 00:12:58.831 "thread": "nvmf_tgt_poll_group_000", 00:12:58.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:12:58.831 "listen_address": { 00:12:58.831 "trtype": "TCP", 00:12:58.831 "adrfam": "IPv4", 00:12:58.831 "traddr": "10.0.0.3", 00:12:58.831 "trsvcid": "4420" 00:12:58.831 }, 00:12:58.831 "peer_address": { 00:12:58.831 "trtype": "TCP", 00:12:58.831 "adrfam": "IPv4", 00:12:58.831 "traddr": "10.0.0.1", 00:12:58.831 "trsvcid": "37880" 00:12:58.831 }, 00:12:58.831 "auth": { 00:12:58.831 "state": "completed", 00:12:58.831 "digest": "sha512", 00:12:58.831 "dhgroup": "ffdhe2048" 00:12:58.831 } 00:12:58.831 } 00:12:58.831 ]' 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.831 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.091 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.091 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:59.091 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.091 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.091 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.091 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.350 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:59.350 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:59.955 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.255 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.515 00:13:00.515 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.515 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:13:00.515 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.774 { 00:13:00.774 "cntlid": 113, 00:13:00.774 "qid": 0, 00:13:00.774 "state": "enabled", 00:13:00.774 "thread": "nvmf_tgt_poll_group_000", 00:13:00.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:00.774 "listen_address": { 00:13:00.774 "trtype": "TCP", 00:13:00.774 "adrfam": "IPv4", 00:13:00.774 "traddr": "10.0.0.3", 00:13:00.774 "trsvcid": "4420" 00:13:00.774 }, 00:13:00.774 "peer_address": { 00:13:00.774 "trtype": "TCP", 00:13:00.774 "adrfam": "IPv4", 00:13:00.774 "traddr": "10.0.0.1", 00:13:00.774 "trsvcid": "37894" 00:13:00.774 }, 00:13:00.774 "auth": { 00:13:00.774 "state": "completed", 00:13:00.774 "digest": "sha512", 00:13:00.774 "dhgroup": "ffdhe3072" 00:13:00.774 } 00:13:00.774 } 00:13:00.774 ]' 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.774 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.034 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:01.034 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.034 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.034 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.034 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.294 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:01.294 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret 
DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:01.861 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.121 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.380 00:13:02.380 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.380 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.380 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.640 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.640 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.640 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.640 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.640 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.640 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.640 { 00:13:02.640 "cntlid": 115, 00:13:02.640 "qid": 0, 00:13:02.640 "state": "enabled", 00:13:02.640 "thread": "nvmf_tgt_poll_group_000", 00:13:02.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:02.640 "listen_address": { 00:13:02.640 "trtype": "TCP", 00:13:02.640 "adrfam": "IPv4", 00:13:02.640 "traddr": "10.0.0.3", 00:13:02.640 "trsvcid": "4420" 00:13:02.640 }, 00:13:02.640 "peer_address": { 00:13:02.640 "trtype": "TCP", 00:13:02.640 "adrfam": "IPv4", 00:13:02.640 "traddr": "10.0.0.1", 00:13:02.640 "trsvcid": "37924" 00:13:02.640 }, 00:13:02.640 "auth": { 00:13:02.640 "state": "completed", 00:13:02.640 "digest": "sha512", 00:13:02.640 "dhgroup": "ffdhe3072" 00:13:02.640 } 00:13:02.640 } 00:13:02.640 ]' 00:13:02.640 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.900 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.900 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.900 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:02.900 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.900 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.900 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.900 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.160 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:03.160 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid 
f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.729 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.987 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.988 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.246 00:13:04.246 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.246 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.246 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.504 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.504 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.504 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.504 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.504 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.504 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.504 { 00:13:04.504 "cntlid": 117, 00:13:04.504 "qid": 0, 00:13:04.504 "state": "enabled", 00:13:04.504 "thread": "nvmf_tgt_poll_group_000", 00:13:04.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:04.504 "listen_address": { 00:13:04.504 "trtype": "TCP", 00:13:04.504 "adrfam": "IPv4", 00:13:04.504 "traddr": "10.0.0.3", 00:13:04.505 "trsvcid": "4420" 00:13:04.505 }, 00:13:04.505 "peer_address": { 00:13:04.505 "trtype": "TCP", 00:13:04.505 "adrfam": "IPv4", 00:13:04.505 "traddr": "10.0.0.1", 00:13:04.505 "trsvcid": "37940" 00:13:04.505 }, 00:13:04.505 "auth": { 00:13:04.505 "state": "completed", 00:13:04.505 "digest": "sha512", 00:13:04.505 "dhgroup": "ffdhe3072" 00:13:04.505 } 00:13:04.505 } 00:13:04.505 ]' 00:13:04.505 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.505 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.505 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.505 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:04.505 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.763 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.763 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.763 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.021 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:05.021 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:05.588 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.848 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.106 00:13:06.106 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.106 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.106 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.364 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.364 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.364 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.364 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.623 { 00:13:06.623 "cntlid": 119, 00:13:06.623 "qid": 0, 00:13:06.623 "state": "enabled", 00:13:06.623 "thread": "nvmf_tgt_poll_group_000", 00:13:06.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:06.623 "listen_address": { 00:13:06.623 "trtype": "TCP", 00:13:06.623 "adrfam": "IPv4", 00:13:06.623 "traddr": "10.0.0.3", 00:13:06.623 "trsvcid": "4420" 00:13:06.623 }, 00:13:06.623 "peer_address": { 00:13:06.623 "trtype": "TCP", 00:13:06.623 "adrfam": "IPv4", 00:13:06.623 "traddr": "10.0.0.1", 00:13:06.623 "trsvcid": "34058" 00:13:06.623 }, 00:13:06.623 "auth": { 00:13:06.623 "state": "completed", 00:13:06.623 "digest": "sha512", 00:13:06.623 "dhgroup": "ffdhe3072" 00:13:06.623 } 00:13:06.623 } 00:13:06.623 ]' 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.623 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:06.882 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:07.449 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:07.707 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.708 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.966 00:13:07.966 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.966 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.966 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.224 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.224 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.224 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.224 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.483 { 00:13:08.483 "cntlid": 121, 00:13:08.483 "qid": 0, 00:13:08.483 "state": "enabled", 00:13:08.483 "thread": "nvmf_tgt_poll_group_000", 00:13:08.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:08.483 "listen_address": { 00:13:08.483 "trtype": "TCP", 00:13:08.483 "adrfam": "IPv4", 00:13:08.483 "traddr": "10.0.0.3", 00:13:08.483 "trsvcid": "4420" 00:13:08.483 }, 00:13:08.483 "peer_address": { 00:13:08.483 "trtype": "TCP", 00:13:08.483 "adrfam": "IPv4", 00:13:08.483 "traddr": "10.0.0.1", 00:13:08.483 "trsvcid": "34080" 00:13:08.483 }, 00:13:08.483 "auth": { 00:13:08.483 "state": "completed", 00:13:08.483 "digest": "sha512", 00:13:08.483 "dhgroup": "ffdhe4096" 00:13:08.483 } 00:13:08.483 } 00:13:08.483 ]' 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.483 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.743 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret 
DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:08.743 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.312 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.572 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.141 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.141 { 00:13:10.141 "cntlid": 123, 00:13:10.141 "qid": 0, 00:13:10.141 "state": "enabled", 00:13:10.141 "thread": "nvmf_tgt_poll_group_000", 00:13:10.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:10.141 "listen_address": { 00:13:10.141 "trtype": "TCP", 00:13:10.141 "adrfam": "IPv4", 00:13:10.141 "traddr": "10.0.0.3", 00:13:10.141 "trsvcid": "4420" 00:13:10.141 }, 00:13:10.141 "peer_address": { 00:13:10.141 "trtype": "TCP", 00:13:10.141 "adrfam": "IPv4", 00:13:10.141 "traddr": "10.0.0.1", 00:13:10.141 "trsvcid": "34110" 00:13:10.141 }, 00:13:10.141 "auth": { 00:13:10.141 "state": "completed", 00:13:10.141 "digest": "sha512", 00:13:10.141 "dhgroup": "ffdhe4096" 00:13:10.141 } 00:13:10.141 } 00:13:10.141 ]' 00:13:10.141 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.141 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.141 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.401 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:10.401 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.401 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.401 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.401 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.660 18:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:10.660 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:11.244 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.528 18:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.528 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.787 00:13:11.787 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.787 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.787 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.047 { 00:13:12.047 "cntlid": 125, 00:13:12.047 "qid": 0, 00:13:12.047 "state": "enabled", 00:13:12.047 "thread": "nvmf_tgt_poll_group_000", 00:13:12.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:12.047 "listen_address": { 00:13:12.047 "trtype": "TCP", 00:13:12.047 "adrfam": "IPv4", 00:13:12.047 "traddr": "10.0.0.3", 00:13:12.047 "trsvcid": "4420" 00:13:12.047 }, 00:13:12.047 "peer_address": { 00:13:12.047 "trtype": "TCP", 00:13:12.047 "adrfam": "IPv4", 00:13:12.047 "traddr": "10.0.0.1", 00:13:12.047 "trsvcid": "34142" 00:13:12.047 }, 00:13:12.047 "auth": { 00:13:12.047 "state": "completed", 00:13:12.047 "digest": "sha512", 00:13:12.047 "dhgroup": "ffdhe4096" 00:13:12.047 } 00:13:12.047 } 00:13:12.047 ]' 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.047 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.307 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:12.307 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.307 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.307 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.307 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.566 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:12.566 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:13.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:13.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.135 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.135 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.135 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:13.135 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:13.395 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:13.964 00:13:13.964 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.964 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.964 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.224 { 00:13:14.224 "cntlid": 127, 00:13:14.224 "qid": 0, 00:13:14.224 "state": "enabled", 00:13:14.224 "thread": "nvmf_tgt_poll_group_000", 00:13:14.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:14.224 "listen_address": { 00:13:14.224 "trtype": "TCP", 00:13:14.224 "adrfam": "IPv4", 00:13:14.224 "traddr": "10.0.0.3", 00:13:14.224 "trsvcid": "4420" 00:13:14.224 }, 00:13:14.224 "peer_address": { 00:13:14.224 "trtype": "TCP", 00:13:14.224 "adrfam": "IPv4", 00:13:14.224 "traddr": "10.0.0.1", 00:13:14.224 "trsvcid": "34152" 00:13:14.224 }, 00:13:14.224 "auth": { 00:13:14.224 "state": "completed", 00:13:14.224 "digest": "sha512", 00:13:14.224 "dhgroup": "ffdhe4096" 00:13:14.224 } 00:13:14.224 } 00:13:14.224 ]' 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.224 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.224 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:14.224 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.225 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.225 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.225 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.484 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:14.484 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:15.422 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.423 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.423 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.423 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.423 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.423 18:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.423 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.423 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.991 00:13:15.991 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.991 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.991 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.251 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.251 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.251 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.251 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.251 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.251 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.251 { 00:13:16.251 "cntlid": 129, 00:13:16.251 "qid": 0, 00:13:16.251 "state": "enabled", 00:13:16.251 "thread": "nvmf_tgt_poll_group_000", 00:13:16.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:16.251 "listen_address": { 00:13:16.251 "trtype": "TCP", 00:13:16.251 "adrfam": "IPv4", 00:13:16.251 "traddr": "10.0.0.3", 00:13:16.251 "trsvcid": "4420" 00:13:16.251 }, 00:13:16.251 "peer_address": { 00:13:16.251 "trtype": "TCP", 00:13:16.251 "adrfam": "IPv4", 00:13:16.251 "traddr": "10.0.0.1", 00:13:16.251 "trsvcid": "60924" 00:13:16.251 }, 00:13:16.251 "auth": { 00:13:16.251 "state": "completed", 00:13:16.251 "digest": "sha512", 00:13:16.251 "dhgroup": "ffdhe6144" 00:13:16.251 } 00:13:16.251 } 00:13:16.251 ]' 00:13:16.251 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.251 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.251 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.251 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:16.251 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.251 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.251 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.251 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.510 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:16.510 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:17.077 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.077 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:17.077 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.077 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.077 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.077 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.077 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.078 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.335 18:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.335 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.902 00:13:17.902 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.902 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.902 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.161 { 00:13:18.161 "cntlid": 131, 00:13:18.161 "qid": 0, 00:13:18.161 "state": "enabled", 00:13:18.161 "thread": "nvmf_tgt_poll_group_000", 00:13:18.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:18.161 "listen_address": { 00:13:18.161 "trtype": "TCP", 00:13:18.161 "adrfam": "IPv4", 00:13:18.161 "traddr": "10.0.0.3", 00:13:18.161 "trsvcid": "4420" 00:13:18.161 }, 00:13:18.161 "peer_address": { 00:13:18.161 "trtype": "TCP", 00:13:18.161 "adrfam": "IPv4", 00:13:18.161 "traddr": "10.0.0.1", 00:13:18.161 "trsvcid": "60950" 00:13:18.161 }, 00:13:18.161 "auth": { 00:13:18.161 "state": "completed", 00:13:18.161 "digest": "sha512", 00:13:18.161 "dhgroup": "ffdhe6144" 00:13:18.161 } 00:13:18.161 } 00:13:18.161 ]' 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.161 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.161 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.161 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:18.161 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.161 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.161 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.420 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:18.420 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:18.988 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.247 18:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.247 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.815 00:13:19.815 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.815 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.815 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.074 { 00:13:20.074 "cntlid": 133, 00:13:20.074 "qid": 0, 00:13:20.074 "state": "enabled", 00:13:20.074 "thread": "nvmf_tgt_poll_group_000", 00:13:20.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:20.074 "listen_address": { 00:13:20.074 "trtype": "TCP", 00:13:20.074 "adrfam": "IPv4", 00:13:20.074 "traddr": "10.0.0.3", 00:13:20.074 "trsvcid": "4420" 00:13:20.074 }, 00:13:20.074 "peer_address": { 00:13:20.074 "trtype": "TCP", 00:13:20.074 "adrfam": "IPv4", 00:13:20.074 "traddr": "10.0.0.1", 00:13:20.074 "trsvcid": "60984" 00:13:20.074 }, 00:13:20.074 "auth": { 00:13:20.074 "state": "completed", 00:13:20.074 "digest": "sha512", 00:13:20.074 "dhgroup": "ffdhe6144" 00:13:20.074 } 00:13:20.074 } 00:13:20.074 ]' 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.074 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.334 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:20.334 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.334 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.334 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.334 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.593 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:20.593 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:21.162 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.421 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:21.422 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.422 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.422 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.422 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.422 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.422 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.681 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.250 00:13:22.250 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.250 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.250 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.509 { 00:13:22.509 "cntlid": 135, 00:13:22.509 "qid": 0, 00:13:22.509 "state": "enabled", 00:13:22.509 "thread": "nvmf_tgt_poll_group_000", 00:13:22.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:22.509 "listen_address": { 00:13:22.509 "trtype": "TCP", 00:13:22.509 "adrfam": "IPv4", 00:13:22.509 "traddr": "10.0.0.3", 00:13:22.509 "trsvcid": "4420" 00:13:22.509 }, 00:13:22.509 "peer_address": { 00:13:22.509 "trtype": "TCP", 00:13:22.509 "adrfam": "IPv4", 00:13:22.509 "traddr": "10.0.0.1", 00:13:22.509 "trsvcid": "32776" 00:13:22.509 }, 00:13:22.509 "auth": { 00:13:22.509 "state": "completed", 00:13:22.509 "digest": "sha512", 00:13:22.509 "dhgroup": "ffdhe6144" 00:13:22.509 } 00:13:22.509 } 00:13:22.509 ]' 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.509 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.767 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:22.767 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.334 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.593 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.594 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.594 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.594 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.594 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.182 00:13:24.182 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.182 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.182 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.440 { 00:13:24.440 "cntlid": 137, 00:13:24.440 "qid": 0, 00:13:24.440 "state": "enabled", 00:13:24.440 "thread": "nvmf_tgt_poll_group_000", 00:13:24.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:24.440 "listen_address": { 00:13:24.440 "trtype": "TCP", 00:13:24.440 "adrfam": "IPv4", 00:13:24.440 "traddr": "10.0.0.3", 00:13:24.440 "trsvcid": "4420" 00:13:24.440 }, 00:13:24.440 "peer_address": { 00:13:24.440 "trtype": "TCP", 00:13:24.440 "adrfam": "IPv4", 00:13:24.440 "traddr": "10.0.0.1", 00:13:24.440 "trsvcid": "32804" 00:13:24.440 }, 00:13:24.440 "auth": { 00:13:24.440 "state": "completed", 00:13:24.440 "digest": "sha512", 00:13:24.440 "dhgroup": "ffdhe8192" 00:13:24.440 } 00:13:24.440 } 00:13:24.440 ]' 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.440 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.440 18:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.698 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.698 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.698 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.698 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.698 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.956 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:24.956 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.522 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:25.780 18:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.780 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.346 00:13:26.347 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.347 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.347 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.604 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.604 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.604 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.604 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.604 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.604 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.604 { 00:13:26.604 "cntlid": 139, 00:13:26.604 "qid": 0, 00:13:26.604 "state": "enabled", 00:13:26.604 "thread": "nvmf_tgt_poll_group_000", 00:13:26.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:26.604 "listen_address": { 00:13:26.604 "trtype": "TCP", 00:13:26.604 "adrfam": "IPv4", 00:13:26.604 "traddr": "10.0.0.3", 00:13:26.604 "trsvcid": "4420" 00:13:26.604 }, 00:13:26.604 "peer_address": { 00:13:26.604 "trtype": "TCP", 00:13:26.604 "adrfam": "IPv4", 00:13:26.604 "traddr": "10.0.0.1", 00:13:26.604 "trsvcid": "41920" 00:13:26.604 }, 00:13:26.604 "auth": { 00:13:26.604 "state": "completed", 00:13:26.604 "digest": "sha512", 00:13:26.604 "dhgroup": "ffdhe8192" 00:13:26.604 } 00:13:26.604 } 00:13:26.604 ]' 00:13:26.604 18:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.863 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.863 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.863 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.863 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.863 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.863 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.863 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.122 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:27.122 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: --dhchap-ctrl-secret DHHC-1:02:ZjBmOWJhMjQzMzFlMTBmOGM1NmQ1YTE4YmE3ZDdkMDlkOWQ0ZjcxYThiZjBmODc1O1O0UA==: 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.688 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.256 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.516 00:13:28.775 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.775 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.775 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.034 { 00:13:29.034 "cntlid": 141, 00:13:29.034 "qid": 0, 00:13:29.034 "state": "enabled", 00:13:29.034 "thread": "nvmf_tgt_poll_group_000", 00:13:29.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:29.034 "listen_address": { 00:13:29.034 "trtype": "TCP", 00:13:29.034 "adrfam": "IPv4", 00:13:29.034 "traddr": "10.0.0.3", 00:13:29.034 "trsvcid": "4420" 00:13:29.034 }, 00:13:29.034 "peer_address": { 00:13:29.034 "trtype": "TCP", 00:13:29.034 "adrfam": "IPv4", 00:13:29.034 "traddr": "10.0.0.1", 00:13:29.034 "trsvcid": "41940" 00:13:29.034 }, 00:13:29.034 "auth": { 00:13:29.034 "state": "completed", 00:13:29.034 "digest": 
"sha512", 00:13:29.034 "dhgroup": "ffdhe8192" 00:13:29.034 } 00:13:29.034 } 00:13:29.034 ]' 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.034 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.294 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:29.294 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:01:NWE0NzFlYWM0ZTEyYzJmNDdjN2M3M2M1YjhhMmY3MjG2xpfy: 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.231 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.231 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.490 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.490 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:30.490 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:30.490 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.059 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.059 { 00:13:31.059 "cntlid": 143, 00:13:31.059 "qid": 0, 00:13:31.059 "state": "enabled", 00:13:31.059 "thread": "nvmf_tgt_poll_group_000", 00:13:31.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:31.059 "listen_address": { 00:13:31.059 "trtype": "TCP", 00:13:31.059 "adrfam": "IPv4", 00:13:31.059 "traddr": "10.0.0.3", 00:13:31.059 "trsvcid": "4420" 00:13:31.059 }, 00:13:31.059 "peer_address": { 00:13:31.059 "trtype": "TCP", 00:13:31.059 "adrfam": "IPv4", 00:13:31.059 "traddr": "10.0.0.1", 00:13:31.059 "trsvcid": "41962" 00:13:31.059 }, 00:13:31.059 "auth": { 00:13:31.059 "state": "completed", 00:13:31.059 
"digest": "sha512", 00:13:31.059 "dhgroup": "ffdhe8192" 00:13:31.059 } 00:13:31.059 } 00:13:31.059 ]' 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.059 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.318 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.318 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.318 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.318 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.318 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.577 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:31.577 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:32.146 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.405 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.974 00:13:32.974 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.974 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.974 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.233 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.233 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.234 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.234 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.493 { 00:13:33.493 "cntlid": 145, 00:13:33.493 "qid": 0, 00:13:33.493 "state": "enabled", 00:13:33.493 "thread": "nvmf_tgt_poll_group_000", 00:13:33.493 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:33.493 "listen_address": { 00:13:33.493 "trtype": "TCP", 00:13:33.493 "adrfam": "IPv4", 00:13:33.493 "traddr": "10.0.0.3", 00:13:33.493 "trsvcid": "4420" 00:13:33.493 }, 00:13:33.493 "peer_address": { 00:13:33.493 "trtype": "TCP", 00:13:33.493 "adrfam": "IPv4", 00:13:33.493 "traddr": "10.0.0.1", 00:13:33.493 "trsvcid": "41980" 00:13:33.493 }, 00:13:33.493 "auth": { 00:13:33.493 "state": "completed", 00:13:33.493 "digest": "sha512", 00:13:33.493 "dhgroup": "ffdhe8192" 00:13:33.493 } 00:13:33.493 } 00:13:33.493 ]' 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.493 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.753 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:33.753 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:00:MGI0NzhhZWY1NWFjYWMxYTQ0OWFmODllNDg0YmRiNTg5YjNlNzcwOWE0MGJmODA05YMhkw==: --dhchap-ctrl-secret DHHC-1:03:Yzk4MTAzNzEwZGMwMTM3OGY2ZDc1NTFmZmJkNGQwMWY5OWRhYTYyZjhhYjczODVmNWRhMDQ5OGUzNzlmMTkwNauHNpM=: 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 00:13:34.322 18:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:34.322 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:34.892 request: 00:13:34.892 { 00:13:34.892 "name": "nvme0", 00:13:34.892 "trtype": "tcp", 00:13:34.892 "traddr": "10.0.0.3", 00:13:34.892 "adrfam": "ipv4", 00:13:34.892 "trsvcid": "4420", 00:13:34.892 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:34.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:34.892 "prchk_reftag": false, 00:13:34.892 "prchk_guard": false, 00:13:34.892 "hdgst": false, 00:13:34.892 "ddgst": false, 00:13:34.892 "dhchap_key": "key2", 00:13:34.892 "allow_unrecognized_csi": false, 00:13:34.892 "method": "bdev_nvme_attach_controller", 00:13:34.892 "req_id": 1 00:13:34.892 } 00:13:34.892 Got JSON-RPC error response 00:13:34.892 response: 00:13:34.892 { 00:13:34.892 "code": -5, 00:13:34.892 "message": "Input/output error" 00:13:34.892 } 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:34.892 
18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:34.892 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:35.463 request: 00:13:35.463 { 00:13:35.463 "name": "nvme0", 00:13:35.463 "trtype": "tcp", 00:13:35.463 "traddr": "10.0.0.3", 00:13:35.463 "adrfam": "ipv4", 00:13:35.463 "trsvcid": "4420", 00:13:35.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:35.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:35.463 "prchk_reftag": false, 00:13:35.463 "prchk_guard": false, 00:13:35.463 "hdgst": false, 00:13:35.463 "ddgst": false, 00:13:35.463 "dhchap_key": "key1", 00:13:35.463 "dhchap_ctrlr_key": "ckey2", 00:13:35.463 "allow_unrecognized_csi": false, 00:13:35.463 "method": "bdev_nvme_attach_controller", 00:13:35.463 "req_id": 1 00:13:35.463 } 00:13:35.463 Got JSON-RPC error response 00:13:35.463 response: 00:13:35.463 { 
00:13:35.463 "code": -5, 00:13:35.463 "message": "Input/output error" 00:13:35.463 } 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.463 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.051 
request: 00:13:36.051 { 00:13:36.051 "name": "nvme0", 00:13:36.051 "trtype": "tcp", 00:13:36.051 "traddr": "10.0.0.3", 00:13:36.051 "adrfam": "ipv4", 00:13:36.051 "trsvcid": "4420", 00:13:36.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:36.051 "prchk_reftag": false, 00:13:36.051 "prchk_guard": false, 00:13:36.051 "hdgst": false, 00:13:36.051 "ddgst": false, 00:13:36.051 "dhchap_key": "key1", 00:13:36.051 "dhchap_ctrlr_key": "ckey1", 00:13:36.051 "allow_unrecognized_csi": false, 00:13:36.051 "method": "bdev_nvme_attach_controller", 00:13:36.051 "req_id": 1 00:13:36.051 } 00:13:36.051 Got JSON-RPC error response 00:13:36.051 response: 00:13:36.051 { 00:13:36.051 "code": -5, 00:13:36.051 "message": "Input/output error" 00:13:36.051 } 00:13:36.051 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:36.051 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.051 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.051 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.051 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:36.051 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 79238 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79238 ']' 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79238 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79238 00:13:36.052 killing process with pid 79238 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79238' 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79238 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79238 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:36.052 18:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=82216 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 82216 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82216 ']' 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.052 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.431 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:37.431 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:37.431 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.431 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82216 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82216 ']' 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
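For readers following the trace, the restart above amounts to the following manual steps. This is an illustrative sketch only, not output captured from the job; it reuses the namespace, binary path, flags, and key files that appear in this run:

    # start a second nvmf_tgt inside the test namespace with auth logging on,
    # holding initialization until RPCs arrive (--wait-for-rpc)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # once /var/tmp/spdk.sock is listening, load the DH-HMAC-CHAP key files
    # generated earlier in the run into the target keyring
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.mDw
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fdj

The trace that follows shows the script doing exactly this for key0/ckey0 through key3.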
00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.431 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 null0 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mDw 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.fdj ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fdj 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4PR 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Hzy ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hzy 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:37.690 18:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Cs 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.8tL ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8tL 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VY8 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
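The hostrpc helper seen throughout this trace is a thin wrapper that points rpc.py at the host-side application socket (/var/tmp/host.sock) instead of the target's default /var/tmp/spdk.sock used by rpc_cmd; the expansion logged next shows the full form. A hand-run equivalent of this add-host/attach pair, as a sketch only with the addresses and key names from this run, would be:

    # target side: allow the host NQN on the subsystem with key3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3
    # host side: attach a controller that authenticates with the same key
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3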
00:13:37.690 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.627 nvme0n1 00:13:38.627 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.627 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.627 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.887 { 00:13:38.887 "cntlid": 1, 00:13:38.887 "qid": 0, 00:13:38.887 "state": "enabled", 00:13:38.887 "thread": "nvmf_tgt_poll_group_000", 00:13:38.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:38.887 "listen_address": { 00:13:38.887 "trtype": "TCP", 00:13:38.887 "adrfam": "IPv4", 00:13:38.887 "traddr": "10.0.0.3", 00:13:38.887 "trsvcid": "4420" 00:13:38.887 }, 00:13:38.887 "peer_address": { 00:13:38.887 "trtype": "TCP", 00:13:38.887 "adrfam": "IPv4", 00:13:38.887 "traddr": "10.0.0.1", 00:13:38.887 "trsvcid": "59684" 00:13:38.887 }, 00:13:38.887 "auth": { 00:13:38.887 "state": "completed", 00:13:38.887 "digest": "sha512", 00:13:38.887 "dhgroup": "ffdhe8192" 00:13:38.887 } 00:13:38.887 } 00:13:38.887 ]' 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.887 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.146 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.146 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.146 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.146 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.146 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.425 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:39.425 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key3 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:39.992 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:40.250 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:40.508 request: 00:13:40.508 { 00:13:40.508 "name": "nvme0", 00:13:40.508 "trtype": "tcp", 00:13:40.508 "traddr": "10.0.0.3", 00:13:40.508 "adrfam": "ipv4", 00:13:40.508 "trsvcid": "4420", 00:13:40.508 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:40.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:40.508 "prchk_reftag": false, 00:13:40.508 "prchk_guard": false, 00:13:40.508 "hdgst": false, 00:13:40.508 "ddgst": false, 00:13:40.508 "dhchap_key": "key3", 00:13:40.508 "allow_unrecognized_csi": false, 00:13:40.508 "method": "bdev_nvme_attach_controller", 00:13:40.508 "req_id": 1 00:13:40.508 } 00:13:40.508 Got JSON-RPC error response 00:13:40.508 response: 00:13:40.508 { 00:13:40.508 "code": -5, 00:13:40.508 "message": "Input/output error" 00:13:40.508 } 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:40.508 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:40.766 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.024 request: 00:13:41.024 { 00:13:41.024 "name": "nvme0", 00:13:41.024 "trtype": "tcp", 00:13:41.024 "traddr": "10.0.0.3", 00:13:41.024 "adrfam": "ipv4", 00:13:41.024 "trsvcid": "4420", 00:13:41.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:41.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:41.024 "prchk_reftag": false, 00:13:41.024 "prchk_guard": false, 00:13:41.024 "hdgst": false, 00:13:41.024 "ddgst": false, 00:13:41.024 "dhchap_key": "key3", 00:13:41.024 "allow_unrecognized_csi": false, 00:13:41.024 "method": "bdev_nvme_attach_controller", 00:13:41.024 "req_id": 1 00:13:41.024 } 00:13:41.024 Got JSON-RPC error response 00:13:41.024 response: 00:13:41.024 { 00:13:41.024 "code": -5, 00:13:41.024 "message": "Input/output error" 00:13:41.024 } 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:41.283 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:41.549 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:41.808 request: 00:13:41.808 { 00:13:41.808 "name": "nvme0", 00:13:41.808 "trtype": "tcp", 00:13:41.808 "traddr": "10.0.0.3", 00:13:41.808 "adrfam": "ipv4", 00:13:41.808 "trsvcid": "4420", 00:13:41.808 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:41.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:41.808 "prchk_reftag": false, 00:13:41.808 "prchk_guard": false, 00:13:41.808 "hdgst": false, 00:13:41.808 "ddgst": false, 00:13:41.808 "dhchap_key": "key0", 00:13:41.808 "dhchap_ctrlr_key": "key1", 00:13:41.808 "allow_unrecognized_csi": false, 00:13:41.808 "method": "bdev_nvme_attach_controller", 00:13:41.808 "req_id": 1 00:13:41.808 } 00:13:41.808 Got JSON-RPC error response 00:13:41.808 response: 00:13:41.808 { 00:13:41.808 "code": -5, 00:13:41.808 "message": "Input/output error" 00:13:41.808 } 00:13:41.808 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:41.808 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.808 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.808 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:41.808 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:41.808 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:41.808 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:42.066 nvme0n1 00:13:42.324 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:42.324 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.324 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:42.324 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.324 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.324 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.890 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 00:13:42.890 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.890 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.890 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:42.890 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:42.890 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:43.457 nvme0n1 00:13:43.457 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:43.457 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:43.457 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:44.025 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.285 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.285 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:44.285 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -l 0 --dhchap-secret DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: --dhchap-ctrl-secret DHHC-1:03:YmNmZWRlZTQzZmFhODUxZDA3MzhjNjI4N2QxOWI4NzE0NGNjYWUyYTA0MTVkNjk2NWExZDYzODliOTEzMDIwNDOFUxY=: 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.851 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.117 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:45.117 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:45.117 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:45.117 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:45.117 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.118 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:45.118 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.118 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:45.118 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:45.118 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:45.684 request: 00:13:45.684 { 00:13:45.684 "name": "nvme0", 00:13:45.684 "trtype": "tcp", 00:13:45.684 "traddr": "10.0.0.3", 00:13:45.684 "adrfam": "ipv4", 00:13:45.684 "trsvcid": "4420", 00:13:45.684 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:45.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c", 00:13:45.684 "prchk_reftag": false, 00:13:45.684 "prchk_guard": false, 00:13:45.684 "hdgst": false, 00:13:45.684 "ddgst": false, 00:13:45.684 "dhchap_key": "key1", 00:13:45.684 "allow_unrecognized_csi": false, 00:13:45.684 "method": "bdev_nvme_attach_controller", 00:13:45.684 "req_id": 1 00:13:45.684 } 00:13:45.684 Got JSON-RPC error response 00:13:45.684 response: 00:13:45.684 { 00:13:45.684 "code": -5, 00:13:45.684 "message": "Input/output error" 00:13:45.684 } 00:13:45.684 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:45.684 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.684 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.684 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.684 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:45.684 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:45.684 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:46.618 nvme0n1 00:13:46.618 
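After an attach that is expected to succeed, the script confirms the controller really exists on the host before moving on, as the next entries show. By hand that check is roughly the following (sketch only, using the socket path from this run):

    # list controllers on the host application and pull out the name; expect "nvme0"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'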
18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:46.618 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:46.618 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.876 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.876 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.876 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.134 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:47.134 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.134 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.134 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.134 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:47.134 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:47.134 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:47.391 nvme0n1 00:13:47.649 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:47.649 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.649 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:47.917 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.917 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.917 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.208 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:48.208 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.208 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.208 18:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: '' 2s 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: ]] 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjQ4OTE5NzI0MTNmOGM3M2I3NWYwMGQ2ODNlOGM1ZGb6L6jN: 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:48.208 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:50.124 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:50.124 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:50.124 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:50.124 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:50.124 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:50.124 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:50.124 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:50.125 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:50.125 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.125 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: 2s 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:50.383 18:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: ]] 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDkzZDNjNGM2OGU1MWM1MGU5ZTk0MmQ4ZmMzMDk1ODlmYzhkMDlmMDczZDM3ZGEyZnBAqA==: 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:50.383 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:52.284 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:53.219 nvme0n1 00:13:53.219 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:53.219 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.219 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.219 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.219 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:53.219 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:54.154 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:54.154 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.154 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:54.154 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.154 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:13:54.154 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.154 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.154 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.155 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:54.155 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:54.413 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:54.413 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:54.413 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.671 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.671 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:54.671 18:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.671 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:54.930 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:55.497 request: 00:13:55.497 { 00:13:55.497 "name": "nvme0", 00:13:55.497 "dhchap_key": "key1", 00:13:55.497 "dhchap_ctrlr_key": "key3", 00:13:55.497 "method": "bdev_nvme_set_keys", 00:13:55.497 "req_id": 1 00:13:55.497 } 00:13:55.497 Got JSON-RPC error response 00:13:55.497 response: 00:13:55.497 { 00:13:55.497 "code": -13, 00:13:55.497 "message": "Permission denied" 00:13:55.497 } 00:13:55.497 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:55.497 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.497 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.497 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.497 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:55.497 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.497 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:55.755 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:55.755 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:56.691 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:56.691 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.691 18:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:56.950 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:57.888 nvme0n1 00:13:57.888 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:57.888 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.888 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:13:58.146 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:58.711 request: 00:13:58.711 { 00:13:58.711 "name": "nvme0", 00:13:58.711 "dhchap_key": "key2", 00:13:58.711 "dhchap_ctrlr_key": "key0", 00:13:58.711 "method": "bdev_nvme_set_keys", 00:13:58.711 "req_id": 1 00:13:58.711 } 00:13:58.711 Got JSON-RPC error response 00:13:58.711 response: 00:13:58.711 { 00:13:58.711 "code": -13, 00:13:58.711 "message": "Permission denied" 00:13:58.711 } 00:13:58.711 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:58.711 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.711 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.711 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.711 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:58.711 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.711 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:58.969 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:58.969 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:00.346 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:00.346 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:00.346 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 79270 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79270 ']' 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79270 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79270 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:00.347 killing process with pid 79270 00:14:00.347 18:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79270' 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79270 00:14:00.347 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79270 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:00.916 rmmod nvme_tcp 00:14:00.916 rmmod nvme_fabrics 00:14:00.916 rmmod nvme_keyring 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 82216 ']' 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 82216 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82216 ']' 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82216 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82216 00:14:00.916 killing process with pid 82216 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82216' 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82216 00:14:00.916 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82216 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:01.175 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.mDw /tmp/spdk.key-sha256.4PR /tmp/spdk.key-sha384.4Cs /tmp/spdk.key-sha512.VY8 /tmp/spdk.key-sha512.fdj /tmp/spdk.key-sha384.Hzy /tmp/spdk.key-sha256.8tL '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:01.435 00:14:01.435 real 3m1.457s 00:14:01.435 user 7m14.040s 00:14:01.435 sys 0m28.142s 00:14:01.435 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.436 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.436 ************************************ 00:14:01.436 END TEST nvmf_auth_target 
00:14:01.436 ************************************ 00:14:01.436 18:31:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:01.436 18:31:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:01.436 18:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:01.436 18:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.436 18:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.436 ************************************ 00:14:01.436 START TEST nvmf_bdevio_no_huge 00:14:01.436 ************************************ 00:14:01.436 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:01.696 * Looking for test storage... 00:14:01.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:01.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.696 --rc genhtml_branch_coverage=1 00:14:01.696 --rc genhtml_function_coverage=1 00:14:01.696 --rc genhtml_legend=1 00:14:01.696 --rc geninfo_all_blocks=1 00:14:01.696 --rc geninfo_unexecuted_blocks=1 00:14:01.696 00:14:01.696 ' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:01.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.696 --rc genhtml_branch_coverage=1 00:14:01.696 --rc genhtml_function_coverage=1 00:14:01.696 --rc genhtml_legend=1 00:14:01.696 --rc geninfo_all_blocks=1 00:14:01.696 --rc geninfo_unexecuted_blocks=1 00:14:01.696 00:14:01.696 ' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:01.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.696 --rc genhtml_branch_coverage=1 00:14:01.696 --rc genhtml_function_coverage=1 00:14:01.696 --rc genhtml_legend=1 00:14:01.696 --rc geninfo_all_blocks=1 00:14:01.696 --rc geninfo_unexecuted_blocks=1 00:14:01.696 00:14:01.696 ' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:01.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.696 --rc genhtml_branch_coverage=1 00:14:01.696 --rc genhtml_function_coverage=1 00:14:01.696 --rc genhtml_legend=1 00:14:01.696 --rc geninfo_all_blocks=1 00:14:01.696 --rc geninfo_unexecuted_blocks=1 00:14:01.696 00:14:01.696 ' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:01.696 
18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.696 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:01.697 
18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:01.697 Cannot find device "nvmf_init_br" 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:01.697 Cannot find device "nvmf_init_br2" 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:01.697 Cannot find device "nvmf_tgt_br" 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:01.697 Cannot find device "nvmf_tgt_br2" 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:01.697 Cannot find device "nvmf_init_br" 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:01.697 Cannot find device "nvmf_init_br2" 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:01.697 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:01.697 Cannot find device "nvmf_tgt_br" 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:01.958 Cannot find device "nvmf_tgt_br2" 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:01.958 Cannot find device "nvmf_br" 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:01.958 Cannot find device "nvmf_init_if" 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:01.958 Cannot find device "nvmf_init_if2" 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:01.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:01.958 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:01.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:01.959 18:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:01.959 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:02.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:02.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:02.219 00:14:02.219 --- 10.0.0.3 ping statistics --- 00:14:02.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.219 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:02.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:02.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:14:02.219 00:14:02.219 --- 10.0.0.4 ping statistics --- 00:14:02.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.219 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:02.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:02.219 00:14:02.219 --- 10.0.0.1 ping statistics --- 00:14:02.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.219 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:02.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:02.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:02.219 00:14:02.219 --- 10.0.0.2 ping statistics --- 00:14:02.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.219 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=82858 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 82858 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82858 ']' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.219 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 [2024-12-08 18:31:20.021688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:02.219 [2024-12-08 18:31:20.021852] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:02.481 [2024-12-08 18:31:20.170766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.481 [2024-12-08 18:31:20.284203] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.481 [2024-12-08 18:31:20.284263] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.481 [2024-12-08 18:31:20.284278] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.481 [2024-12-08 18:31:20.284289] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.481 [2024-12-08 18:31:20.284298] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.481 [2024-12-08 18:31:20.284458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:02.481 [2024-12-08 18:31:20.285051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:02.481 [2024-12-08 18:31:20.285132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:02.481 [2024-12-08 18:31:20.285243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.481 [2024-12-08 18:31:20.291359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 [2024-12-08 18:31:21.115552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 Malloc0 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.416 18:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 [2024-12-08 18:31:21.164073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:03.416 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:03.416 { 00:14:03.416 "params": { 00:14:03.416 "name": "Nvme$subsystem", 00:14:03.416 "trtype": "$TEST_TRANSPORT", 00:14:03.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.416 "adrfam": "ipv4", 00:14:03.416 "trsvcid": "$NVMF_PORT", 00:14:03.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.417 "hdgst": ${hdgst:-false}, 00:14:03.417 "ddgst": ${ddgst:-false} 00:14:03.417 }, 00:14:03.417 "method": "bdev_nvme_attach_controller" 00:14:03.417 } 00:14:03.417 EOF 00:14:03.417 )") 00:14:03.417 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:14:03.417 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
00:14:03.417 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:14:03.417 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:03.417 "params": { 00:14:03.417 "name": "Nvme1", 00:14:03.417 "trtype": "tcp", 00:14:03.417 "traddr": "10.0.0.3", 00:14:03.417 "adrfam": "ipv4", 00:14:03.417 "trsvcid": "4420", 00:14:03.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:03.417 "hdgst": false, 00:14:03.417 "ddgst": false 00:14:03.417 }, 00:14:03.417 "method": "bdev_nvme_attach_controller" 00:14:03.417 }' 00:14:03.417 [2024-12-08 18:31:21.225181] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:03.417 [2024-12-08 18:31:21.225272] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82897 ] 00:14:03.675 [2024-12-08 18:31:21.364513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.675 [2024-12-08 18:31:21.477035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.675 [2024-12-08 18:31:21.477187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.675 [2024-12-08 18:31:21.477197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.675 [2024-12-08 18:31:21.491563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.934 I/O targets: 00:14:03.934 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:03.934 00:14:03.934 00:14:03.934 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.934 http://cunit.sourceforge.net/ 00:14:03.934 00:14:03.934 00:14:03.934 Suite: bdevio tests on: Nvme1n1 00:14:03.934 Test: blockdev write read block ...passed 00:14:03.934 Test: blockdev write zeroes read block ...passed 00:14:03.934 Test: blockdev write zeroes read no split ...passed 00:14:03.934 Test: blockdev write zeroes read split ...passed 00:14:03.934 Test: blockdev write zeroes read split partial ...passed 00:14:03.934 Test: blockdev reset ...[2024-12-08 18:31:21.720650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:03.934 [2024-12-08 18:31:21.720760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x263c2d0 (9): Bad file descriptor 00:14:03.934 [2024-12-08 18:31:21.738431] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:03.934 passed 00:14:03.934 Test: blockdev write read 8 blocks ...passed 00:14:03.934 Test: blockdev write read size > 128k ...passed 00:14:03.934 Test: blockdev write read invalid size ...passed 00:14:03.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:03.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:03.934 Test: blockdev write read max offset ...passed 00:14:03.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:03.934 Test: blockdev writev readv 8 blocks ...passed 00:14:03.934 Test: blockdev writev readv 30 x 1block ...passed 00:14:03.934 Test: blockdev writev readv block ...passed 00:14:03.934 Test: blockdev writev readv size > 128k ...passed 00:14:03.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:03.934 Test: blockdev comparev and writev ...[2024-12-08 18:31:21.748676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.934 [2024-12-08 18:31:21.748728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:03.934 [2024-12-08 18:31:21.748767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.934 [2024-12-08 18:31:21.748778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:03.934 [2024-12-08 18:31:21.749160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.934 [2024-12-08 18:31:21.749190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:03.934 [2024-12-08 18:31:21.749208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.934 [2024-12-08 18:31:21.749220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:03.935 [2024-12-08 18:31:21.749620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.935 [2024-12-08 18:31:21.749654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:03.935 [2024-12-08 18:31:21.749673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.935 [2024-12-08 18:31:21.749684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:03.935 [2024-12-08 18:31:21.750063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.935 [2024-12-08 18:31:21.750098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:03.935 [2024-12-08 18:31:21.750117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.935 [2024-12-08 18:31:21.750128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:03.935 passed 00:14:03.935 Test: blockdev nvme passthru rw ...passed 00:14:03.935 Test: blockdev nvme passthru vendor specific ...[2024-12-08 18:31:21.751468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.935 [2024-12-08 18:31:21.751609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:03.935 [2024-12-08 18:31:21.751934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.935 [2024-12-08 18:31:21.751966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:03.935 [2024-12-08 18:31:21.752119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.935 [2024-12-08 18:31:21.752148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:03.935 [2024-12-08 18:31:21.752450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.935 [2024-12-08 18:31:21.752485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:03.935 passed 00:14:03.935 Test: blockdev nvme admin passthru ...passed 00:14:03.935 Test: blockdev copy ...passed 00:14:03.935 00:14:03.935 Run Summary: Type Total Ran Passed Failed Inactive 00:14:03.935 suites 1 1 n/a 0 0 00:14:03.935 tests 23 23 23 0 0 00:14:03.935 asserts 152 152 152 0 n/a 00:14:03.935 00:14:03.935 Elapsed time = 0.179 seconds 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:04.193 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:04.453 rmmod nvme_tcp 00:14:04.453 rmmod nvme_fabrics 00:14:04.453 rmmod nvme_keyring 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 82858 ']' 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 82858 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82858 ']' 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82858 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82858 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:04.453 killing process with pid 82858 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82858' 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82858 00:14:04.453 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82858 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:04.712 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:04.972 18:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:04.972 00:14:04.972 real 0m3.573s 00:14:04.972 user 0m10.722s 00:14:04.972 sys 0m1.430s 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.972 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:04.972 ************************************ 00:14:04.972 END TEST nvmf_bdevio_no_huge 00:14:04.972 ************************************ 00:14:05.232 18:31:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:05.232 18:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:05.232 18:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.232 18:31:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.232 ************************************ 00:14:05.232 START TEST nvmf_tls 00:14:05.232 ************************************ 00:14:05.232 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:05.232 * Looking for test storage... 
00:14:05.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.232 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:05.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.233 --rc genhtml_branch_coverage=1 00:14:05.233 --rc genhtml_function_coverage=1 00:14:05.233 --rc genhtml_legend=1 00:14:05.233 --rc geninfo_all_blocks=1 00:14:05.233 --rc geninfo_unexecuted_blocks=1 00:14:05.233 00:14:05.233 ' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:05.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.233 --rc genhtml_branch_coverage=1 00:14:05.233 --rc genhtml_function_coverage=1 00:14:05.233 --rc genhtml_legend=1 00:14:05.233 --rc geninfo_all_blocks=1 00:14:05.233 --rc geninfo_unexecuted_blocks=1 00:14:05.233 00:14:05.233 ' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:05.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.233 --rc genhtml_branch_coverage=1 00:14:05.233 --rc genhtml_function_coverage=1 00:14:05.233 --rc genhtml_legend=1 00:14:05.233 --rc geninfo_all_blocks=1 00:14:05.233 --rc geninfo_unexecuted_blocks=1 00:14:05.233 00:14:05.233 ' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:05.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.233 --rc genhtml_branch_coverage=1 00:14:05.233 --rc genhtml_function_coverage=1 00:14:05.233 --rc genhtml_legend=1 00:14:05.233 --rc geninfo_all_blocks=1 00:14:05.233 --rc geninfo_unexecuted_blocks=1 00:14:05.233 00:14:05.233 ' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.233 18:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:05.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:05.233 
18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:05.233 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:05.234 Cannot find device "nvmf_init_br" 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:05.234 Cannot find device "nvmf_init_br2" 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:05.234 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:05.494 Cannot find device "nvmf_tgt_br" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.494 Cannot find device "nvmf_tgt_br2" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:05.494 Cannot find device "nvmf_init_br" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:05.494 Cannot find device "nvmf_init_br2" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:05.494 Cannot find device "nvmf_tgt_br" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:05.494 Cannot find device "nvmf_tgt_br2" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:05.494 Cannot find device "nvmf_br" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:05.494 Cannot find device "nvmf_init_if" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:05.494 Cannot find device "nvmf_init_if2" 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.494 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:05.754 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:05.755 18:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:05.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:05.755 00:14:05.755 --- 10.0.0.3 ping statistics --- 00:14:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.755 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:05.755 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:05.755 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:14:05.755 00:14:05.755 --- 10.0.0.4 ping statistics --- 00:14:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.755 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:05.755 00:14:05.755 --- 10.0.0.1 ping statistics --- 00:14:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.755 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:05.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:14:05.755 00:14:05.755 --- 10.0.0.2 ping statistics --- 00:14:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.755 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83137 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83137 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83137 ']' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.755 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.755 [2024-12-08 18:31:23.557390] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:05.755 [2024-12-08 18:31:23.557515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.014 [2024-12-08 18:31:23.700907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.014 [2024-12-08 18:31:23.778098] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.014 [2024-12-08 18:31:23.778167] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.014 [2024-12-08 18:31:23.778181] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.014 [2024-12-08 18:31:23.778191] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.014 [2024-12-08 18:31:23.778200] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.014 [2024-12-08 18:31:23.778232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:06.953 true 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:06.953 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:07.520 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:07.520 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:07.521 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:07.521 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:07.521 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:07.780 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:07.780 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:07.780 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:08.038 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:08.038 18:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:08.605 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:08.605 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:08.605 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:08.605 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:08.605 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:08.605 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:08.605 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:08.864 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:08.864 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:09.123 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:09.123 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:09.123 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:09.382 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:09.382 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:09.640 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:09.900 18:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.1xlFg3faTh 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.WdfcquErKp 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1xlFg3faTh 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.WdfcquErKp 00:14:09.900 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:10.159 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:10.416 [2024-12-08 18:31:28.252417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.416 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.1xlFg3faTh 00:14:10.416 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1xlFg3faTh 00:14:10.416 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:10.673 [2024-12-08 18:31:28.560080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.674 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:10.932 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:11.190 [2024-12-08 18:31:29.064213] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:11.190 [2024-12-08 18:31:29.064488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:11.190 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:11.448 malloc0 00:14:11.448 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:11.707 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.1xlFg3faTh 00:14:11.964 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:12.222 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1xlFg3faTh 00:14:24.462 Initializing NVMe Controllers 00:14:24.462 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.462 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:24.462 Initialization complete. Launching workers. 00:14:24.462 ======================================================== 00:14:24.462 Latency(us) 00:14:24.462 Device Information : IOPS MiB/s Average min max 00:14:24.462 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10102.48 39.46 6336.11 1455.99 12120.13 00:14:24.462 ======================================================== 00:14:24.462 Total : 10102.48 39.46 6336.11 1455.99 12120.13 00:14:24.462 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xlFg3faTh 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1xlFg3faTh 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83377 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83377 /var/tmp/bdevperf.sock 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83377 ']' 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.462 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.462 [2024-12-08 18:31:40.374086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:24.462 [2024-12-08 18:31:40.374197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83377 ] 00:14:24.462 [2024-12-08 18:31:40.516090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.462 [2024-12-08 18:31:40.615714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.462 [2024-12-08 18:31:40.693962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.462 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.462 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:24.462 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1xlFg3faTh 00:14:24.462 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:24.462 [2024-12-08 18:31:41.886780] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.462 TLSTESTn1 00:14:24.462 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:24.462 Running I/O for 10 seconds... 
00:14:26.334 4305.00 IOPS, 16.82 MiB/s [2024-12-08T18:31:45.201Z] 4330.50 IOPS, 16.92 MiB/s [2024-12-08T18:31:46.134Z] 4397.33 IOPS, 17.18 MiB/s [2024-12-08T18:31:47.507Z] 4302.75 IOPS, 16.81 MiB/s [2024-12-08T18:31:48.440Z] 4318.60 IOPS, 16.87 MiB/s [2024-12-08T18:31:49.374Z] 4358.67 IOPS, 17.03 MiB/s [2024-12-08T18:31:50.388Z] 4363.71 IOPS, 17.05 MiB/s [2024-12-08T18:31:51.325Z] 4350.50 IOPS, 16.99 MiB/s [2024-12-08T18:31:52.261Z] 4373.78 IOPS, 17.09 MiB/s [2024-12-08T18:31:52.261Z] 4398.10 IOPS, 17.18 MiB/s 00:14:34.331 Latency(us) 00:14:34.331 [2024-12-08T18:31:52.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.331 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:34.332 Verification LBA range: start 0x0 length 0x2000 00:14:34.332 TLSTESTn1 : 10.01 4404.07 17.20 0.00 0.00 29017.14 4796.04 23473.80 00:14:34.332 [2024-12-08T18:31:52.262Z] =================================================================================================================== 00:14:34.332 [2024-12-08T18:31:52.262Z] Total : 4404.07 17.20 0.00 0.00 29017.14 4796.04 23473.80 00:14:34.332 { 00:14:34.332 "results": [ 00:14:34.332 { 00:14:34.332 "job": "TLSTESTn1", 00:14:34.332 "core_mask": "0x4", 00:14:34.332 "workload": "verify", 00:14:34.332 "status": "finished", 00:14:34.332 "verify_range": { 00:14:34.332 "start": 0, 00:14:34.332 "length": 8192 00:14:34.332 }, 00:14:34.332 "queue_depth": 128, 00:14:34.332 "io_size": 4096, 00:14:34.332 "runtime": 10.014606, 00:14:34.332 "iops": 4404.067419127622, 00:14:34.332 "mibps": 17.203388355967274, 00:14:34.332 "io_failed": 0, 00:14:34.332 "io_timeout": 0, 00:14:34.332 "avg_latency_us": 29017.138075542865, 00:14:34.332 "min_latency_us": 4796.043636363636, 00:14:34.332 "max_latency_us": 23473.803636363635 00:14:34.332 } 00:14:34.332 ], 00:14:34.332 "core_count": 1 00:14:34.332 } 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83377 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83377 ']' 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83377 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83377 00:14:34.332 killing process with pid 83377 00:14:34.332 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.332 00:14:34.332 Latency(us) 00:14:34.332 [2024-12-08T18:31:52.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.332 [2024-12-08T18:31:52.262Z] =================================================================================================================== 00:14:34.332 [2024-12-08T18:31:52.262Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 83377' 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83377 00:14:34.332 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83377 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WdfcquErKp 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WdfcquErKp 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WdfcquErKp 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WdfcquErKp 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83513 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83513 /var/tmp/bdevperf.sock 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83513 ']' 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.591 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.591 [2024-12-08 18:31:52.411489] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:34.591 [2024-12-08 18:31:52.411593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83513 ] 00:14:34.850 [2024-12-08 18:31:52.541526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.851 [2024-12-08 18:31:52.623648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.851 [2024-12-08 18:31:52.674065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:34.851 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.851 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:34.851 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WdfcquErKp 00:14:35.109 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:35.367 [2024-12-08 18:31:53.242094] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:35.367 [2024-12-08 18:31:53.250324] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:35.367 [2024-12-08 18:31:53.250752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab8d30 (107): Transport endpoint is not connected 00:14:35.367 [2024-12-08 18:31:53.251739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab8d30 (9): Bad file descriptor 00:14:35.367 [2024-12-08 18:31:53.252737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:35.367 [2024-12-08 18:31:53.252760] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:35.367 [2024-12-08 18:31:53.252770] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:35.367 [2024-12-08 18:31:53.252781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
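Note on the failure traced above: the attach attempt dies during the TLS handshake, so the initiator only observes the closed socket (spdk_sock_recv() errno 107, "Transport endpoint is not connected") and puts the controller into the failed state; bdevperf then surfaces this as the JSON-RPC -5 (Input/output error) response shown next, which is exactly what this negative test expects. A minimal sketch of the two client-side RPCs exercised here, using this run's bdevperf RPC socket and per-run mktemp key path (both will differ on another run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# register the PSK file on the bdevperf side, then attach with TLS enabled
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WdfcquErKp
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0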
00:14:35.367 request: 00:14:35.367 { 00:14:35.367 "name": "TLSTEST", 00:14:35.367 "trtype": "tcp", 00:14:35.367 "traddr": "10.0.0.3", 00:14:35.367 "adrfam": "ipv4", 00:14:35.367 "trsvcid": "4420", 00:14:35.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.367 "prchk_reftag": false, 00:14:35.367 "prchk_guard": false, 00:14:35.367 "hdgst": false, 00:14:35.367 "ddgst": false, 00:14:35.367 "psk": "key0", 00:14:35.367 "allow_unrecognized_csi": false, 00:14:35.367 "method": "bdev_nvme_attach_controller", 00:14:35.367 "req_id": 1 00:14:35.367 } 00:14:35.367 Got JSON-RPC error response 00:14:35.367 response: 00:14:35.367 { 00:14:35.367 "code": -5, 00:14:35.367 "message": "Input/output error" 00:14:35.367 } 00:14:35.367 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83513 00:14:35.367 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83513 ']' 00:14:35.367 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83513 00:14:35.367 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:35.367 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:35.367 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83513 00:14:35.626 killing process with pid 83513 00:14:35.626 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.626 00:14:35.626 Latency(us) 00:14:35.626 [2024-12-08T18:31:53.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.626 [2024-12-08T18:31:53.556Z] =================================================================================================================== 00:14:35.626 [2024-12-08T18:31:53.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:35.626 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:35.626 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:35.626 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83513' 00:14:35.626 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83513 00:14:35.626 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83513 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1xlFg3faTh 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1xlFg3faTh 
00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1xlFg3faTh 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1xlFg3faTh 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83534 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83534 /var/tmp/bdevperf.sock 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83534 ']' 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.886 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.886 [2024-12-08 18:31:53.628421] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:35.886 [2024-12-08 18:31:53.628701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83534 ] 00:14:35.886 [2024-12-08 18:31:53.761501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.145 [2024-12-08 18:31:53.850325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.145 [2024-12-08 18:31:53.928521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.712 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.712 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:36.712 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1xlFg3faTh 00:14:36.972 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:37.231 [2024-12-08 18:31:55.135092] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.231 [2024-12-08 18:31:55.141430] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.231 [2024-12-08 18:31:55.141928] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.231 [2024-12-08 18:31:55.142020] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:37.231 [2024-12-08 18:31:55.142154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc15d30 (107): Transport endpoint is not connected 00:14:37.231 [2024-12-08 18:31:55.143137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc15d30 (9): Bad file descriptor 00:14:37.231 [2024-12-08 18:31:55.144130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:37.231 [2024-12-08 18:31:55.144170] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:37.231 [2024-12-08 18:31:55.144185] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:37.231 [2024-12-08 18:31:55.144198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
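Here the target-side reason for the handshake failure is visible in the trace: "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". The PSK is looked up by the (host NQN, subsystem NQN) pair, and key0 was registered on the target for host1, so connecting as host2 is refused and the initiator again sees errno 107. The only change relative to a working attach is the -q argument (the complementary case, correct host but wrong subsystem NQN, is exercised by the next test); sketch:

# same attach as the working configuration, but with a host NQN the target holds no PSK for
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0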
00:14:37.231 request: 00:14:37.231 { 00:14:37.231 "name": "TLSTEST", 00:14:37.231 "trtype": "tcp", 00:14:37.231 "traddr": "10.0.0.3", 00:14:37.231 "adrfam": "ipv4", 00:14:37.231 "trsvcid": "4420", 00:14:37.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.231 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:37.231 "prchk_reftag": false, 00:14:37.231 "prchk_guard": false, 00:14:37.231 "hdgst": false, 00:14:37.231 "ddgst": false, 00:14:37.231 "psk": "key0", 00:14:37.231 "allow_unrecognized_csi": false, 00:14:37.231 "method": "bdev_nvme_attach_controller", 00:14:37.231 "req_id": 1 00:14:37.231 } 00:14:37.231 Got JSON-RPC error response 00:14:37.231 response: 00:14:37.231 { 00:14:37.231 "code": -5, 00:14:37.231 "message": "Input/output error" 00:14:37.231 } 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83534 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83534 ']' 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83534 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83534 00:14:37.498 killing process with pid 83534 00:14:37.498 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.498 00:14:37.498 Latency(us) 00:14:37.498 [2024-12-08T18:31:55.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.498 [2024-12-08T18:31:55.428Z] =================================================================================================================== 00:14:37.498 [2024-12-08T18:31:55.428Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83534' 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83534 00:14:37.498 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83534 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xlFg3faTh 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xlFg3faTh 
00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1xlFg3faTh 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1xlFg3faTh 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83568 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83568 /var/tmp/bdevperf.sock 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83568 ']' 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.759 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.759 [2024-12-08 18:31:55.497569] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:37.760 [2024-12-08 18:31:55.497694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83568 ] 00:14:37.760 [2024-12-08 18:31:55.636808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.018 [2024-12-08 18:31:55.747750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.018 [2024-12-08 18:31:55.811238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.018 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.018 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:38.018 18:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1xlFg3faTh 00:14:38.276 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.842 [2024-12-08 18:31:56.469873] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.842 [2024-12-08 18:31:56.481238] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.842 [2024-12-08 18:31:56.481611] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.842 [2024-12-08 18:31:56.481885] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:38.842 [2024-12-08 18:31:56.482202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1883d30 (107): Transport endpoint is not connected 00:14:38.842 [2024-12-08 18:31:56.483179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1883d30 (9): Bad file descriptor 00:14:38.842 [2024-12-08 18:31:56.484175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:38.842 [2024-12-08 18:31:56.484509] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:38.842 [2024-12-08 18:31:56.484527] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:38.842 [2024-12-08 18:31:56.484539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
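As in the previous case, this failure (wrong subsystem NQN this time: the target finds no PSK for host1 against cnode2) is the asserted outcome, not a test error: the JSON-RPC -5 response that follows makes run_bdevperf exit nonzero, and the surrounding NOT wrapper from autotest_common.sh turns that into a pass. A simplified sketch of the logic visible in the trace (local es=0, run the command, es=1 on failure, final (( !es == 0 )) check); the real helper also does the valid_exec_arg and xtrace handling seen above:

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # succeed only if the wrapped command failed
}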
00:14:38.842 request: 00:14:38.842 { 00:14:38.842 "name": "TLSTEST", 00:14:38.842 "trtype": "tcp", 00:14:38.842 "traddr": "10.0.0.3", 00:14:38.842 "adrfam": "ipv4", 00:14:38.842 "trsvcid": "4420", 00:14:38.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:38.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.842 "prchk_reftag": false, 00:14:38.842 "prchk_guard": false, 00:14:38.842 "hdgst": false, 00:14:38.842 "ddgst": false, 00:14:38.842 "psk": "key0", 00:14:38.842 "allow_unrecognized_csi": false, 00:14:38.842 "method": "bdev_nvme_attach_controller", 00:14:38.842 "req_id": 1 00:14:38.842 } 00:14:38.842 Got JSON-RPC error response 00:14:38.842 response: 00:14:38.842 { 00:14:38.842 "code": -5, 00:14:38.842 "message": "Input/output error" 00:14:38.842 } 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83568 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83568 ']' 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83568 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83568 00:14:38.842 killing process with pid 83568 00:14:38.842 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.842 00:14:38.842 Latency(us) 00:14:38.842 [2024-12-08T18:31:56.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.842 [2024-12-08T18:31:56.772Z] =================================================================================================================== 00:14:38.842 [2024-12-08T18:31:56.772Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83568' 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83568 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83568 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.842 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.843 18:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:38.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83589 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83589 /var/tmp/bdevperf.sock 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83589 ']' 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.843 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.100 [2024-12-08 18:31:56.776619] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:39.100 [2024-12-08 18:31:56.776896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83589 ] 00:14:39.100 [2024-12-08 18:31:56.910908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.100 [2024-12-08 18:31:57.005502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.358 [2024-12-08 18:31:57.059232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.358 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.358 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:39.358 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:39.616 [2024-12-08 18:31:57.399188] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:39.616 [2024-12-08 18:31:57.399572] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:39.616 request: 00:14:39.616 { 00:14:39.616 "name": "key0", 00:14:39.616 "path": "", 00:14:39.616 "method": "keyring_file_add_key", 00:14:39.616 "req_id": 1 00:14:39.616 } 00:14:39.616 Got JSON-RPC error response 00:14:39.616 response: 00:14:39.616 { 00:14:39.616 "code": -1, 00:14:39.616 "message": "Operation not permitted" 00:14:39.616 } 00:14:39.616 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:39.876 [2024-12-08 18:31:57.626945] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:39.876 [2024-12-08 18:31:57.627584] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:39.876 request: 00:14:39.876 { 00:14:39.876 "name": "TLSTEST", 00:14:39.876 "trtype": "tcp", 00:14:39.876 "traddr": "10.0.0.3", 00:14:39.876 "adrfam": "ipv4", 00:14:39.876 "trsvcid": "4420", 00:14:39.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.876 "prchk_reftag": false, 00:14:39.876 "prchk_guard": false, 00:14:39.876 "hdgst": false, 00:14:39.876 "ddgst": false, 00:14:39.876 "psk": "key0", 00:14:39.876 "allow_unrecognized_csi": false, 00:14:39.876 "method": "bdev_nvme_attach_controller", 00:14:39.876 "req_id": 1 00:14:39.876 } 00:14:39.876 Got JSON-RPC error response 00:14:39.876 response: 00:14:39.876 { 00:14:39.876 "code": -126, 00:14:39.876 "message": "Required key not available" 00:14:39.876 } 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83589 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83589 ']' 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83589 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.876 18:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83589 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83589' 00:14:39.876 killing process with pid 83589 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83589 00:14:39.876 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.876 00:14:39.876 Latency(us) 00:14:39.876 [2024-12-08T18:31:57.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.876 [2024-12-08T18:31:57.806Z] =================================================================================================================== 00:14:39.876 [2024-12-08T18:31:57.806Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:39.876 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83589 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83137 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83137 ']' 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83137 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83137 00:14:40.136 killing process with pid 83137 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83137' 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83137 00:14:40.136 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83137 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # prefix=NVMeTLSkey-1 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.BIPDQsMCYr 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.BIPDQsMCYr 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83626 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83626 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83626 ']' 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.397 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.397 [2024-12-08 18:31:58.310237] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:40.397 [2024-12-08 18:31:58.310579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.657 [2024-12-08 18:31:58.441032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.657 [2024-12-08 18:31:58.539296] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.657 [2024-12-08 18:31:58.539713] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
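The long key assembled above (key_long=NVMeTLSkey-1:02:MDAx...Njc3wWXNJw==:) is in the TLS PSK interchange format rather than a raw hex string: the NVMeTLSkey-1 prefix, a two-digit hash field (02 selects SHA-384 here; 01 would be SHA-256), a base64 blob, and a trailing colon. A stand-alone sketch of that construction, under the assumption that the blob is base64 of the configured key with its little-endian CRC-32 appended, which is what the in-line python above appears to compute (treat the snippet as illustrative, not as the canonical helper):

key=00112233445566778899aabbccddeeff0011223344556677
python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key"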
00:14:40.657 [2024-12-08 18:31:58.539825] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.657 [2024-12-08 18:31:58.539838] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.657 [2024-12-08 18:31:58.539845] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.657 [2024-12-08 18:31:58.539887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.917 [2024-12-08 18:31:58.610930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.BIPDQsMCYr 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BIPDQsMCYr 00:14:41.485 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:41.745 [2024-12-08 18:31:59.651813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.745 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:42.313 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:42.313 [2024-12-08 18:32:00.188078] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:42.313 [2024-12-08 18:32:00.188385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:42.313 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:42.881 malloc0 00:14:42.881 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:43.139 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:14:43.397 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BIPDQsMCYr 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
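For reference, the target-side sequence that the successful run below depends on is all in the trace above; condensed to the bare RPC calls (issued against the target's default RPC socket, with the exact paths and NQNs used by this job):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k    # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr    # key file was chmod 0600 above
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The bdevperf client then registers the same key file on its own RPC socket and attaches with --psk key0, which is what produces the TLSTESTn1 throughput run further down.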
00:14:43.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BIPDQsMCYr 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83687 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83687 /var/tmp/bdevperf.sock 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83687 ']' 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.656 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.914 [2024-12-08 18:32:01.594830] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:43.914 [2024-12-08 18:32:01.594966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83687 ] 00:14:43.914 [2024-12-08 18:32:01.731027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.914 [2024-12-08 18:32:01.831891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.171 [2024-12-08 18:32:01.888035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.736 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.736 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:44.736 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:14:45.353 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:45.611 [2024-12-08 18:32:03.337011] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.611 TLSTESTn1 00:14:45.611 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:45.611 Running I/O for 10 seconds... 00:14:47.922 4286.00 IOPS, 16.74 MiB/s [2024-12-08T18:32:06.808Z] 4409.50 IOPS, 17.22 MiB/s [2024-12-08T18:32:07.744Z] 4464.67 IOPS, 17.44 MiB/s [2024-12-08T18:32:08.680Z] 4491.75 IOPS, 17.55 MiB/s [2024-12-08T18:32:09.617Z] 4506.00 IOPS, 17.60 MiB/s [2024-12-08T18:32:10.554Z] 4515.33 IOPS, 17.64 MiB/s [2024-12-08T18:32:11.931Z] 4521.14 IOPS, 17.66 MiB/s [2024-12-08T18:32:12.869Z] 4526.38 IOPS, 17.68 MiB/s [2024-12-08T18:32:13.807Z] 4529.33 IOPS, 17.69 MiB/s [2024-12-08T18:32:13.807Z] 4532.70 IOPS, 17.71 MiB/s 00:14:55.877 Latency(us) 00:14:55.877 [2024-12-08T18:32:13.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.877 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:55.877 Verification LBA range: start 0x0 length 0x2000 00:14:55.877 TLSTESTn1 : 10.01 4538.75 17.73 0.00 0.00 28155.36 4289.63 22997.18 00:14:55.877 [2024-12-08T18:32:13.807Z] =================================================================================================================== 00:14:55.877 [2024-12-08T18:32:13.807Z] Total : 4538.75 17.73 0.00 0.00 28155.36 4289.63 22997.18 00:14:55.877 { 00:14:55.877 "results": [ 00:14:55.877 { 00:14:55.877 "job": "TLSTESTn1", 00:14:55.877 "core_mask": "0x4", 00:14:55.877 "workload": "verify", 00:14:55.877 "status": "finished", 00:14:55.877 "verify_range": { 00:14:55.877 "start": 0, 00:14:55.877 "length": 8192 00:14:55.877 }, 00:14:55.877 "queue_depth": 128, 00:14:55.877 "io_size": 4096, 00:14:55.877 "runtime": 10.013777, 00:14:55.877 "iops": 4538.746968301771, 00:14:55.877 "mibps": 17.729480344928792, 00:14:55.877 "io_failed": 0, 00:14:55.877 "io_timeout": 0, 00:14:55.877 "avg_latency_us": 28155.358849804983, 00:14:55.877 "min_latency_us": 4289.629090909091, 00:14:55.877 
"max_latency_us": 22997.17818181818 00:14:55.877 } 00:14:55.877 ], 00:14:55.877 "core_count": 1 00:14:55.877 } 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83687 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83687 ']' 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83687 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83687 00:14:55.877 killing process with pid 83687 00:14:55.877 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.877 00:14:55.877 Latency(us) 00:14:55.877 [2024-12-08T18:32:13.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.877 [2024-12-08T18:32:13.807Z] =================================================================================================================== 00:14:55.877 [2024-12-08T18:32:13.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83687' 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83687 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83687 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.BIPDQsMCYr 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BIPDQsMCYr 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BIPDQsMCYr 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BIPDQsMCYr 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BIPDQsMCYr 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83828 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83828 /var/tmp/bdevperf.sock 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83828 ']' 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.877 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.137 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.137 [2024-12-08 18:32:13.858075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:56.137 [2024-12-08 18:32:13.858422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83828 ] 00:14:56.137 [2024-12-08 18:32:13.996254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.137 [2024-12-08 18:32:14.061489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.397 [2024-12-08 18:32:14.115311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.966 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.966 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:56.966 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:14:57.234 [2024-12-08 18:32:15.130260] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BIPDQsMCYr': 0100666 00:14:57.234 [2024-12-08 18:32:15.130306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:57.234 request: 00:14:57.234 { 00:14:57.234 "name": "key0", 00:14:57.234 "path": "/tmp/tmp.BIPDQsMCYr", 00:14:57.234 "method": "keyring_file_add_key", 00:14:57.234 "req_id": 1 00:14:57.234 } 00:14:57.234 Got JSON-RPC error response 00:14:57.234 response: 00:14:57.234 { 00:14:57.234 "code": -1, 00:14:57.234 "message": "Operation not permitted" 00:14:57.234 } 00:14:57.234 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:57.498 [2024-12-08 18:32:15.382427] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:57.498 [2024-12-08 18:32:15.382501] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:57.498 request: 00:14:57.498 { 00:14:57.498 "name": "TLSTEST", 00:14:57.498 "trtype": "tcp", 00:14:57.498 "traddr": "10.0.0.3", 00:14:57.498 "adrfam": "ipv4", 00:14:57.498 "trsvcid": "4420", 00:14:57.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.498 "prchk_reftag": false, 00:14:57.498 "prchk_guard": false, 00:14:57.498 "hdgst": false, 00:14:57.498 "ddgst": false, 00:14:57.498 "psk": "key0", 00:14:57.498 "allow_unrecognized_csi": false, 00:14:57.498 "method": "bdev_nvme_attach_controller", 00:14:57.498 "req_id": 1 00:14:57.498 } 00:14:57.498 Got JSON-RPC error response 00:14:57.498 response: 00:14:57.498 { 00:14:57.498 "code": -126, 00:14:57.498 "message": "Required key not available" 00:14:57.498 } 00:14:57.498 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83828 00:14:57.498 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83828 ']' 00:14:57.498 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83828 00:14:57.498 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:57.498 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.498 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83828 00:14:57.498 killing process with pid 83828 00:14:57.498 Received shutdown signal, test time was about 10.000000 seconds 00:14:57.498 00:14:57.499 Latency(us) 00:14:57.499 [2024-12-08T18:32:15.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.499 [2024-12-08T18:32:15.429Z] =================================================================================================================== 00:14:57.499 [2024-12-08T18:32:15.429Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:57.499 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:57.499 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:57.499 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83828' 00:14:57.499 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83828 00:14:57.499 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83828 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83626 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83626 ']' 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83626 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83626 00:14:57.757 killing process with pid 83626 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83626' 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83626 00:14:57.757 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83626 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:58.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83867 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83867 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83867 ']' 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.324 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.324 [2024-12-08 18:32:16.044319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:58.324 [2024-12-08 18:32:16.044440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.324 [2024-12-08 18:32:16.183767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.583 [2024-12-08 18:32:16.277184] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.583 [2024-12-08 18:32:16.277251] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.584 [2024-12-08 18:32:16.277264] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.584 [2024-12-08 18:32:16.277272] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.584 [2024-12-08 18:32:16.277280] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:58.584 [2024-12-08 18:32:16.277313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.584 [2024-12-08 18:32:16.363165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.BIPDQsMCYr 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BIPDQsMCYr 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.BIPDQsMCYr 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BIPDQsMCYr 00:14:58.584 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.842 [2024-12-08 18:32:16.725757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.842 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:59.411 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:59.411 [2024-12-08 18:32:17.301966] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:59.411 [2024-12-08 18:32:17.302353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:59.411 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:59.979 malloc0 00:14:59.979 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:00.238 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:15:00.239 
[2024-12-08 18:32:18.167539] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BIPDQsMCYr': 0100666 00:15:00.239 [2024-12-08 18:32:18.167605] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:00.498 request: 00:15:00.498 { 00:15:00.498 "name": "key0", 00:15:00.498 "path": "/tmp/tmp.BIPDQsMCYr", 00:15:00.498 "method": "keyring_file_add_key", 00:15:00.498 "req_id": 1 00:15:00.498 } 00:15:00.498 Got JSON-RPC error response 00:15:00.498 response: 00:15:00.498 { 00:15:00.498 "code": -1, 00:15:00.498 "message": "Operation not permitted" 00:15:00.498 } 00:15:00.498 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.758 [2024-12-08 18:32:18.435664] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:00.758 [2024-12-08 18:32:18.435743] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:00.758 request: 00:15:00.758 { 00:15:00.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.758 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.758 "psk": "key0", 00:15:00.758 "method": "nvmf_subsystem_add_host", 00:15:00.758 "req_id": 1 00:15:00.758 } 00:15:00.758 Got JSON-RPC error response 00:15:00.758 response: 00:15:00.758 { 00:15:00.758 "code": -32603, 00:15:00.758 "message": "Internal error" 00:15:00.758 } 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83867 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83867 ']' 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83867 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83867 00:15:00.758 killing process with pid 83867 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83867' 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83867 00:15:00.758 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83867 00:15:01.017 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.BIPDQsMCYr 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83936 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83936 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83936 ']' 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.018 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.018 [2024-12-08 18:32:18.905289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:01.018 [2024-12-08 18:32:18.905560] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.277 [2024-12-08 18:32:19.046305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.277 [2024-12-08 18:32:19.149484] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.277 [2024-12-08 18:32:19.149610] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.277 [2024-12-08 18:32:19.149622] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.277 [2024-12-08 18:32:19.149630] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.277 [2024-12-08 18:32:19.149636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
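Note on the permission errors above: keyring_file_add_key rejects PSK files that are readable by group or others (hence the 0100666 complaints from keyring_file_check_path), which is why tls.sh runs chmod 0600 on the key before this target comes up. A minimal sketch of that fix, assuming $PSK_PATH already holds a key in the NVMe TLS interchange format (the variable name is illustrative, not from this run):

  chmod 0600 "$PSK_PATH"    # owner-only access; the 0666 mode is what triggered 'Operation not permitted' earlier
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 "$PSK_PATH"
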
00:15:01.277 [2024-12-08 18:32:19.149674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.536 [2024-12-08 18:32:19.236628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.BIPDQsMCYr 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BIPDQsMCYr 00:15:02.105 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:02.363 [2024-12-08 18:32:20.261684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.363 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:02.941 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:02.941 [2024-12-08 18:32:20.813937] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.941 [2024-12-08 18:32:20.814376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:02.941 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:03.200 malloc0 00:15:03.200 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:03.766 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:15:03.767 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83993 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83993 /var/tmp/bdevperf.sock 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83993 ']' 
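For reference, the target-side setup that just succeeded reduces to the following RPC sequence (addresses, NQNs, and key path as used in this run; a sketch of what setup_nvmf_tgt issues, not the full helper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k    # -k: TLS-enabled listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr            # succeeds now that the file is mode 0600
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The bdevperf process launched just above mirrors this on the initiator side: it adds the same key over /var/tmp/bdevperf.sock and calls bdev_nvme_attach_controller with --psk key0, which is what creates the TLSTESTn1 bdev a few lines down.
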
00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.026 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.284 [2024-12-08 18:32:21.996278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:04.284 [2024-12-08 18:32:21.996391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83993 ] 00:15:04.284 [2024-12-08 18:32:22.133727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.543 [2024-12-08 18:32:22.226233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.543 [2024-12-08 18:32:22.288538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.543 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.543 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:04.543 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:15:04.802 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:05.061 [2024-12-08 18:32:22.854182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.061 TLSTESTn1 00:15:05.061 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:05.631 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:05.631 "subsystems": [ 00:15:05.631 { 00:15:05.631 "subsystem": "keyring", 00:15:05.631 "config": [ 00:15:05.631 { 00:15:05.631 "method": "keyring_file_add_key", 00:15:05.631 "params": { 00:15:05.631 "name": "key0", 00:15:05.631 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:05.631 } 00:15:05.631 } 00:15:05.631 ] 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "subsystem": "iobuf", 00:15:05.631 "config": [ 00:15:05.631 { 00:15:05.631 "method": "iobuf_set_options", 00:15:05.631 "params": { 00:15:05.631 "small_pool_count": 8192, 00:15:05.631 "large_pool_count": 1024, 00:15:05.631 "small_bufsize": 8192, 00:15:05.631 "large_bufsize": 135168 00:15:05.631 } 00:15:05.631 } 00:15:05.631 ] 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "subsystem": "sock", 00:15:05.631 "config": [ 00:15:05.631 { 00:15:05.631 "method": "sock_set_default_impl", 00:15:05.631 "params": { 00:15:05.631 "impl_name": "uring" 
00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "sock_impl_set_options", 00:15:05.631 "params": { 00:15:05.631 "impl_name": "ssl", 00:15:05.631 "recv_buf_size": 4096, 00:15:05.631 "send_buf_size": 4096, 00:15:05.631 "enable_recv_pipe": true, 00:15:05.631 "enable_quickack": false, 00:15:05.631 "enable_placement_id": 0, 00:15:05.631 "enable_zerocopy_send_server": true, 00:15:05.631 "enable_zerocopy_send_client": false, 00:15:05.631 "zerocopy_threshold": 0, 00:15:05.631 "tls_version": 0, 00:15:05.631 "enable_ktls": false 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "sock_impl_set_options", 00:15:05.631 "params": { 00:15:05.631 "impl_name": "posix", 00:15:05.631 "recv_buf_size": 2097152, 00:15:05.631 "send_buf_size": 2097152, 00:15:05.631 "enable_recv_pipe": true, 00:15:05.631 "enable_quickack": false, 00:15:05.631 "enable_placement_id": 0, 00:15:05.631 "enable_zerocopy_send_server": true, 00:15:05.631 "enable_zerocopy_send_client": false, 00:15:05.631 "zerocopy_threshold": 0, 00:15:05.631 "tls_version": 0, 00:15:05.631 "enable_ktls": false 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "sock_impl_set_options", 00:15:05.631 "params": { 00:15:05.631 "impl_name": "uring", 00:15:05.631 "recv_buf_size": 2097152, 00:15:05.631 "send_buf_size": 2097152, 00:15:05.631 "enable_recv_pipe": true, 00:15:05.631 "enable_quickack": false, 00:15:05.631 "enable_placement_id": 0, 00:15:05.631 "enable_zerocopy_send_server": false, 00:15:05.631 "enable_zerocopy_send_client": false, 00:15:05.631 "zerocopy_threshold": 0, 00:15:05.631 "tls_version": 0, 00:15:05.631 "enable_ktls": false 00:15:05.631 } 00:15:05.631 } 00:15:05.631 ] 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "subsystem": "vmd", 00:15:05.631 "config": [] 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "subsystem": "accel", 00:15:05.631 "config": [ 00:15:05.631 { 00:15:05.631 "method": "accel_set_options", 00:15:05.631 "params": { 00:15:05.631 "small_cache_size": 128, 00:15:05.631 "large_cache_size": 16, 00:15:05.631 "task_count": 2048, 00:15:05.631 "sequence_count": 2048, 00:15:05.631 "buf_count": 2048 00:15:05.631 } 00:15:05.631 } 00:15:05.631 ] 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "subsystem": "bdev", 00:15:05.631 "config": [ 00:15:05.631 { 00:15:05.631 "method": "bdev_set_options", 00:15:05.631 "params": { 00:15:05.631 "bdev_io_pool_size": 65535, 00:15:05.631 "bdev_io_cache_size": 256, 00:15:05.631 "bdev_auto_examine": true, 00:15:05.631 "iobuf_small_cache_size": 128, 00:15:05.631 "iobuf_large_cache_size": 16 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "bdev_raid_set_options", 00:15:05.631 "params": { 00:15:05.631 "process_window_size_kb": 1024, 00:15:05.631 "process_max_bandwidth_mb_sec": 0 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "bdev_iscsi_set_options", 00:15:05.631 "params": { 00:15:05.631 "timeout_sec": 30 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "bdev_nvme_set_options", 00:15:05.631 "params": { 00:15:05.631 "action_on_timeout": "none", 00:15:05.631 "timeout_us": 0, 00:15:05.631 "timeout_admin_us": 0, 00:15:05.631 "keep_alive_timeout_ms": 10000, 00:15:05.631 "arbitration_burst": 0, 00:15:05.631 "low_priority_weight": 0, 00:15:05.631 "medium_priority_weight": 0, 00:15:05.631 "high_priority_weight": 0, 00:15:05.631 "nvme_adminq_poll_period_us": 10000, 00:15:05.631 "nvme_ioq_poll_period_us": 0, 00:15:05.631 "io_queue_requests": 0, 00:15:05.631 "delay_cmd_submit": true, 00:15:05.631 
"transport_retry_count": 4, 00:15:05.631 "bdev_retry_count": 3, 00:15:05.631 "transport_ack_timeout": 0, 00:15:05.631 "ctrlr_loss_timeout_sec": 0, 00:15:05.631 "reconnect_delay_sec": 0, 00:15:05.631 "fast_io_fail_timeout_sec": 0, 00:15:05.631 "disable_auto_failback": false, 00:15:05.631 "generate_uuids": false, 00:15:05.631 "transport_tos": 0, 00:15:05.631 "nvme_error_stat": false, 00:15:05.631 "rdma_srq_size": 0, 00:15:05.631 "io_path_stat": false, 00:15:05.631 "allow_accel_sequence": false, 00:15:05.631 "rdma_max_cq_size": 0, 00:15:05.631 "rdma_cm_event_timeout_ms": 0, 00:15:05.631 "dhchap_digests": [ 00:15:05.631 "sha256", 00:15:05.631 "sha384", 00:15:05.631 "sha512" 00:15:05.631 ], 00:15:05.631 "dhchap_dhgroups": [ 00:15:05.631 "null", 00:15:05.631 "ffdhe2048", 00:15:05.631 "ffdhe3072", 00:15:05.631 "ffdhe4096", 00:15:05.631 "ffdhe6144", 00:15:05.631 "ffdhe8192" 00:15:05.631 ] 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "bdev_nvme_set_hotplug", 00:15:05.631 "params": { 00:15:05.631 "period_us": 100000, 00:15:05.631 "enable": false 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "bdev_malloc_create", 00:15:05.631 "params": { 00:15:05.631 "name": "malloc0", 00:15:05.631 "num_blocks": 8192, 00:15:05.631 "block_size": 4096, 00:15:05.631 "physical_block_size": 4096, 00:15:05.631 "uuid": "b1b8698d-778f-4129-83ea-a3615e906768", 00:15:05.631 "optimal_io_boundary": 0, 00:15:05.631 "md_size": 0, 00:15:05.631 "dif_type": 0, 00:15:05.631 "dif_is_head_of_md": false, 00:15:05.631 "dif_pi_format": 0 00:15:05.631 } 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "method": "bdev_wait_for_examine" 00:15:05.631 } 00:15:05.631 ] 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "subsystem": "nbd", 00:15:05.631 "config": [] 00:15:05.631 }, 00:15:05.631 { 00:15:05.631 "subsystem": "scheduler", 00:15:05.631 "config": [ 00:15:05.631 { 00:15:05.631 "method": "framework_set_scheduler", 00:15:05.632 "params": { 00:15:05.632 "name": "static" 00:15:05.632 } 00:15:05.632 } 00:15:05.632 ] 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "subsystem": "nvmf", 00:15:05.632 "config": [ 00:15:05.632 { 00:15:05.632 "method": "nvmf_set_config", 00:15:05.632 "params": { 00:15:05.632 "discovery_filter": "match_any", 00:15:05.632 "admin_cmd_passthru": { 00:15:05.632 "identify_ctrlr": false 00:15:05.632 }, 00:15:05.632 "dhchap_digests": [ 00:15:05.632 "sha256", 00:15:05.632 "sha384", 00:15:05.632 "sha512" 00:15:05.632 ], 00:15:05.632 "dhchap_dhgroups": [ 00:15:05.632 "null", 00:15:05.632 "ffdhe2048", 00:15:05.632 "ffdhe3072", 00:15:05.632 "ffdhe4096", 00:15:05.632 "ffdhe6144", 00:15:05.632 "ffdhe8192" 00:15:05.632 ] 00:15:05.632 } 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "method": "nvmf_set_max_subsystems", 00:15:05.632 "params": { 00:15:05.632 "max_subsystems": 1024 00:15:05.632 } 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "method": "nvmf_set_crdt", 00:15:05.632 "params": { 00:15:05.632 "crdt1": 0, 00:15:05.632 "crdt2": 0, 00:15:05.632 "crdt3": 0 00:15:05.632 } 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "method": "nvmf_create_transport", 00:15:05.632 "params": { 00:15:05.632 "trtype": "TCP", 00:15:05.632 "max_queue_depth": 128, 00:15:05.632 "max_io_qpairs_per_ctrlr": 127, 00:15:05.632 "in_capsule_data_size": 4096, 00:15:05.632 "max_io_size": 131072, 00:15:05.632 "io_unit_size": 131072, 00:15:05.632 "max_aq_depth": 128, 00:15:05.632 "num_shared_buffers": 511, 00:15:05.632 "buf_cache_size": 4294967295, 00:15:05.632 "dif_insert_or_strip": false, 00:15:05.632 "zcopy": false, 00:15:05.632 
"c2h_success": false, 00:15:05.632 "sock_priority": 0, 00:15:05.632 "abort_timeout_sec": 1, 00:15:05.632 "ack_timeout": 0, 00:15:05.632 "data_wr_pool_size": 0 00:15:05.632 } 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "method": "nvmf_create_subsystem", 00:15:05.632 "params": { 00:15:05.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.632 "allow_any_host": false, 00:15:05.632 "serial_number": "SPDK00000000000001", 00:15:05.632 "model_number": "SPDK bdev Controller", 00:15:05.632 "max_namespaces": 10, 00:15:05.632 "min_cntlid": 1, 00:15:05.632 "max_cntlid": 65519, 00:15:05.632 "ana_reporting": false 00:15:05.632 } 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "method": "nvmf_subsystem_add_host", 00:15:05.632 "params": { 00:15:05.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.632 "host": "nqn.2016-06.io.spdk:host1", 00:15:05.632 "psk": "key0" 00:15:05.632 } 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "method": "nvmf_subsystem_add_ns", 00:15:05.632 "params": { 00:15:05.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.632 "namespace": { 00:15:05.632 "nsid": 1, 00:15:05.632 "bdev_name": "malloc0", 00:15:05.632 "nguid": "B1B8698D778F412983EAA3615E906768", 00:15:05.632 "uuid": "b1b8698d-778f-4129-83ea-a3615e906768", 00:15:05.632 "no_auto_visible": false 00:15:05.632 } 00:15:05.632 } 00:15:05.632 }, 00:15:05.632 { 00:15:05.632 "method": "nvmf_subsystem_add_listener", 00:15:05.632 "params": { 00:15:05.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.632 "listen_address": { 00:15:05.632 "trtype": "TCP", 00:15:05.632 "adrfam": "IPv4", 00:15:05.632 "traddr": "10.0.0.3", 00:15:05.632 "trsvcid": "4420" 00:15:05.632 }, 00:15:05.632 "secure_channel": true 00:15:05.632 } 00:15:05.632 } 00:15:05.632 ] 00:15:05.632 } 00:15:05.632 ] 00:15:05.632 }' 00:15:05.632 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:05.893 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:05.893 "subsystems": [ 00:15:05.893 { 00:15:05.893 "subsystem": "keyring", 00:15:05.893 "config": [ 00:15:05.893 { 00:15:05.893 "method": "keyring_file_add_key", 00:15:05.893 "params": { 00:15:05.893 "name": "key0", 00:15:05.893 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:05.893 } 00:15:05.893 } 00:15:05.893 ] 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "subsystem": "iobuf", 00:15:05.893 "config": [ 00:15:05.893 { 00:15:05.893 "method": "iobuf_set_options", 00:15:05.893 "params": { 00:15:05.893 "small_pool_count": 8192, 00:15:05.893 "large_pool_count": 1024, 00:15:05.893 "small_bufsize": 8192, 00:15:05.893 "large_bufsize": 135168 00:15:05.893 } 00:15:05.893 } 00:15:05.893 ] 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "subsystem": "sock", 00:15:05.893 "config": [ 00:15:05.893 { 00:15:05.893 "method": "sock_set_default_impl", 00:15:05.893 "params": { 00:15:05.893 "impl_name": "uring" 00:15:05.893 } 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "method": "sock_impl_set_options", 00:15:05.893 "params": { 00:15:05.893 "impl_name": "ssl", 00:15:05.893 "recv_buf_size": 4096, 00:15:05.893 "send_buf_size": 4096, 00:15:05.893 "enable_recv_pipe": true, 00:15:05.893 "enable_quickack": false, 00:15:05.893 "enable_placement_id": 0, 00:15:05.893 "enable_zerocopy_send_server": true, 00:15:05.893 "enable_zerocopy_send_client": false, 00:15:05.893 "zerocopy_threshold": 0, 00:15:05.893 "tls_version": 0, 00:15:05.893 "enable_ktls": false 00:15:05.893 } 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "method": 
"sock_impl_set_options", 00:15:05.893 "params": { 00:15:05.893 "impl_name": "posix", 00:15:05.893 "recv_buf_size": 2097152, 00:15:05.893 "send_buf_size": 2097152, 00:15:05.893 "enable_recv_pipe": true, 00:15:05.893 "enable_quickack": false, 00:15:05.893 "enable_placement_id": 0, 00:15:05.893 "enable_zerocopy_send_server": true, 00:15:05.893 "enable_zerocopy_send_client": false, 00:15:05.893 "zerocopy_threshold": 0, 00:15:05.893 "tls_version": 0, 00:15:05.893 "enable_ktls": false 00:15:05.893 } 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "method": "sock_impl_set_options", 00:15:05.893 "params": { 00:15:05.893 "impl_name": "uring", 00:15:05.893 "recv_buf_size": 2097152, 00:15:05.893 "send_buf_size": 2097152, 00:15:05.893 "enable_recv_pipe": true, 00:15:05.893 "enable_quickack": false, 00:15:05.893 "enable_placement_id": 0, 00:15:05.893 "enable_zerocopy_send_server": false, 00:15:05.893 "enable_zerocopy_send_client": false, 00:15:05.893 "zerocopy_threshold": 0, 00:15:05.893 "tls_version": 0, 00:15:05.893 "enable_ktls": false 00:15:05.893 } 00:15:05.893 } 00:15:05.893 ] 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "subsystem": "vmd", 00:15:05.893 "config": [] 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "subsystem": "accel", 00:15:05.893 "config": [ 00:15:05.893 { 00:15:05.893 "method": "accel_set_options", 00:15:05.893 "params": { 00:15:05.893 "small_cache_size": 128, 00:15:05.893 "large_cache_size": 16, 00:15:05.893 "task_count": 2048, 00:15:05.893 "sequence_count": 2048, 00:15:05.893 "buf_count": 2048 00:15:05.893 } 00:15:05.893 } 00:15:05.893 ] 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "subsystem": "bdev", 00:15:05.893 "config": [ 00:15:05.893 { 00:15:05.893 "method": "bdev_set_options", 00:15:05.893 "params": { 00:15:05.893 "bdev_io_pool_size": 65535, 00:15:05.893 "bdev_io_cache_size": 256, 00:15:05.893 "bdev_auto_examine": true, 00:15:05.893 "iobuf_small_cache_size": 128, 00:15:05.893 "iobuf_large_cache_size": 16 00:15:05.893 } 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "method": "bdev_raid_set_options", 00:15:05.893 "params": { 00:15:05.893 "process_window_size_kb": 1024, 00:15:05.893 "process_max_bandwidth_mb_sec": 0 00:15:05.893 } 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "method": "bdev_iscsi_set_options", 00:15:05.893 "params": { 00:15:05.893 "timeout_sec": 30 00:15:05.893 } 00:15:05.893 }, 00:15:05.893 { 00:15:05.893 "method": "bdev_nvme_set_options", 00:15:05.893 "params": { 00:15:05.893 "action_on_timeout": "none", 00:15:05.893 "timeout_us": 0, 00:15:05.893 "timeout_admin_us": 0, 00:15:05.893 "keep_alive_timeout_ms": 10000, 00:15:05.893 "arbitration_burst": 0, 00:15:05.893 "low_priority_weight": 0, 00:15:05.893 "medium_priority_weight": 0, 00:15:05.893 "high_priority_weight": 0, 00:15:05.893 "nvme_adminq_poll_period_us": 10000, 00:15:05.916 "nvme_ioq_poll_period_us": 0, 00:15:05.916 "io_queue_requests": 512, 00:15:05.916 "delay_cmd_submit": true, 00:15:05.916 "transport_retry_count": 4, 00:15:05.916 "bdev_retry_count": 3, 00:15:05.916 "transport_ack_timeout": 0, 00:15:05.916 "ctrlr_loss_timeout_sec": 0, 00:15:05.916 "reconnect_delay_sec": 0, 00:15:05.916 "fast_io_fail_timeout_sec": 0, 00:15:05.916 "disable_auto_failback": false, 00:15:05.916 "generate_uuids": false, 00:15:05.916 "transport_tos": 0, 00:15:05.916 "nvme_error_stat": false, 00:15:05.916 "rdma_srq_size": 0, 00:15:05.916 "io_path_stat": false, 00:15:05.916 "allow_accel_sequence": false, 00:15:05.916 "rdma_max_cq_size": 0, 00:15:05.916 "rdma_cm_event_timeout_ms": 0, 00:15:05.916 "dhchap_digests": [ 00:15:05.916 
"sha256", 00:15:05.916 "sha384", 00:15:05.916 "sha512" 00:15:05.916 ], 00:15:05.916 "dhchap_dhgroups": [ 00:15:05.916 "null", 00:15:05.916 "ffdhe2048", 00:15:05.916 "ffdhe3072", 00:15:05.916 "ffdhe4096", 00:15:05.916 "ffdhe6144", 00:15:05.916 "ffdhe8192" 00:15:05.916 ] 00:15:05.916 } 00:15:05.916 }, 00:15:05.916 { 00:15:05.916 "method": "bdev_nvme_attach_controller", 00:15:05.916 "params": { 00:15:05.916 "name": "TLSTEST", 00:15:05.916 "trtype": "TCP", 00:15:05.916 "adrfam": "IPv4", 00:15:05.916 "traddr": "10.0.0.3", 00:15:05.916 "trsvcid": "4420", 00:15:05.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.916 "prchk_reftag": false, 00:15:05.916 "prchk_guard": false, 00:15:05.916 "ctrlr_loss_timeout_sec": 0, 00:15:05.916 "reconnect_delay_sec": 0, 00:15:05.916 "fast_io_fail_timeout_sec": 0, 00:15:05.916 "psk": "key0", 00:15:05.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.917 "hdgst": false, 00:15:05.917 "ddgst": false 00:15:05.917 } 00:15:05.917 }, 00:15:05.917 { 00:15:05.917 "method": "bdev_nvme_set_hotplug", 00:15:05.917 "params": { 00:15:05.917 "period_us": 100000, 00:15:05.917 "enable": false 00:15:05.917 } 00:15:05.917 }, 00:15:05.917 { 00:15:05.917 "method": "bdev_wait_for_examine" 00:15:05.917 } 00:15:05.917 ] 00:15:05.917 }, 00:15:05.917 { 00:15:05.917 "subsystem": "nbd", 00:15:05.917 "config": [] 00:15:05.917 } 00:15:05.917 ] 00:15:05.917 }' 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83993 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83993 ']' 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83993 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83993 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83993' 00:15:05.917 killing process with pid 83993 00:15:05.917 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.917 00:15:05.917 Latency(us) 00:15:05.917 [2024-12-08T18:32:23.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.917 [2024-12-08T18:32:23.847Z] =================================================================================================================== 00:15:05.917 [2024-12-08T18:32:23.847Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83993 00:15:05.917 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83993 00:15:06.177 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83936 00:15:06.177 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83936 ']' 00:15:06.177 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83936 00:15:06.177 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@955 -- # uname 00:15:06.177 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.177 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83936 00:15:06.177 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:06.177 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:06.177 killing process with pid 83936 00:15:06.177 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83936' 00:15:06.177 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83936 00:15:06.177 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83936 00:15:06.437 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:06.437 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:06.437 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:06.437 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:06.437 "subsystems": [ 00:15:06.437 { 00:15:06.437 "subsystem": "keyring", 00:15:06.437 "config": [ 00:15:06.437 { 00:15:06.437 "method": "keyring_file_add_key", 00:15:06.437 "params": { 00:15:06.437 "name": "key0", 00:15:06.437 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:06.437 } 00:15:06.437 } 00:15:06.437 ] 00:15:06.437 }, 00:15:06.437 { 00:15:06.437 "subsystem": "iobuf", 00:15:06.437 "config": [ 00:15:06.437 { 00:15:06.437 "method": "iobuf_set_options", 00:15:06.437 "params": { 00:15:06.437 "small_pool_count": 8192, 00:15:06.437 "large_pool_count": 1024, 00:15:06.438 "small_bufsize": 8192, 00:15:06.438 "large_bufsize": 135168 00:15:06.438 } 00:15:06.438 } 00:15:06.438 ] 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "subsystem": "sock", 00:15:06.438 "config": [ 00:15:06.438 { 00:15:06.438 "method": "sock_set_default_impl", 00:15:06.438 "params": { 00:15:06.438 "impl_name": "uring" 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "sock_impl_set_options", 00:15:06.438 "params": { 00:15:06.438 "impl_name": "ssl", 00:15:06.438 "recv_buf_size": 4096, 00:15:06.438 "send_buf_size": 4096, 00:15:06.438 "enable_recv_pipe": true, 00:15:06.438 "enable_quickack": false, 00:15:06.438 "enable_placement_id": 0, 00:15:06.438 "enable_zerocopy_send_server": true, 00:15:06.438 "enable_zerocopy_send_client": false, 00:15:06.438 "zerocopy_threshold": 0, 00:15:06.438 "tls_version": 0, 00:15:06.438 "enable_ktls": false 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "sock_impl_set_options", 00:15:06.438 "params": { 00:15:06.438 "impl_name": "posix", 00:15:06.438 "recv_buf_size": 2097152, 00:15:06.438 "send_buf_size": 2097152, 00:15:06.438 "enable_recv_pipe": true, 00:15:06.438 "enable_quickack": false, 00:15:06.438 "enable_placement_id": 0, 00:15:06.438 "enable_zerocopy_send_server": true, 00:15:06.438 "enable_zerocopy_send_client": false, 00:15:06.438 "zerocopy_threshold": 0, 00:15:06.438 "tls_version": 0, 00:15:06.438 "enable_ktls": false 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "sock_impl_set_options", 00:15:06.438 "params": { 00:15:06.438 "impl_name": "uring", 00:15:06.438 "recv_buf_size": 2097152, 00:15:06.438 
"send_buf_size": 2097152, 00:15:06.438 "enable_recv_pipe": true, 00:15:06.438 "enable_quickack": false, 00:15:06.438 "enable_placement_id": 0, 00:15:06.438 "enable_zerocopy_send_server": false, 00:15:06.438 "enable_zerocopy_send_client": false, 00:15:06.438 "zerocopy_threshold": 0, 00:15:06.438 "tls_version": 0, 00:15:06.438 "enable_ktls": false 00:15:06.438 } 00:15:06.438 } 00:15:06.438 ] 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "subsystem": "vmd", 00:15:06.438 "config": [] 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "subsystem": "accel", 00:15:06.438 "config": [ 00:15:06.438 { 00:15:06.438 "method": "accel_set_options", 00:15:06.438 "params": { 00:15:06.438 "small_cache_size": 128, 00:15:06.438 "large_cache_size": 16, 00:15:06.438 "task_count": 2048, 00:15:06.438 "sequence_count": 2048, 00:15:06.438 "buf_count": 2048 00:15:06.438 } 00:15:06.438 } 00:15:06.438 ] 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "subsystem": "bdev", 00:15:06.438 "config": [ 00:15:06.438 { 00:15:06.438 "method": "bdev_set_options", 00:15:06.438 "params": { 00:15:06.438 "bdev_io_pool_size": 65535, 00:15:06.438 "bdev_io_cache_size": 256, 00:15:06.438 "bdev_auto_examine": true, 00:15:06.438 "iobuf_small_cache_size": 128, 00:15:06.438 "iobuf_large_cache_size": 16 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "bdev_raid_set_options", 00:15:06.438 "params": { 00:15:06.438 "process_window_size_kb": 1024, 00:15:06.438 "process_max_bandwidth_mb_sec": 0 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "bdev_iscsi_set_options", 00:15:06.438 "params": { 00:15:06.438 "timeout_sec": 30 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "bdev_nvme_set_options", 00:15:06.438 "params": { 00:15:06.438 "action_on_timeout": "none", 00:15:06.438 "timeout_us": 0, 00:15:06.438 "timeout_admin_us": 0, 00:15:06.438 "keep_alive_timeout_ms": 10000, 00:15:06.438 "arbitration_burst": 0, 00:15:06.438 "low_priority_weight": 0, 00:15:06.438 "medium_priority_weight": 0, 00:15:06.438 "high_priority_weight": 0, 00:15:06.438 "nvme_adminq_poll_period_us": 10000, 00:15:06.438 "nvme_ioq_poll_period_us": 0, 00:15:06.438 "io_queue_requests": 0, 00:15:06.438 "delay_cmd_submit": true, 00:15:06.438 "transport_retry_count": 4, 00:15:06.438 "bdev_retry_count": 3, 00:15:06.438 "transport_ack_timeout": 0, 00:15:06.438 "ctrlr_loss_timeout_sec": 0, 00:15:06.438 "reconnect_delay_sec": 0, 00:15:06.438 "fast_io_fail_timeout_sec": 0, 00:15:06.438 "disable_auto_failback": false, 00:15:06.438 "generate_uuids": false, 00:15:06.438 "transport_tos": 0, 00:15:06.438 "nvme_error_stat": false, 00:15:06.438 "rdma_srq_size": 0, 00:15:06.438 "io_path_stat": false, 00:15:06.438 "allow_accel_sequence": false, 00:15:06.438 "rdma_max_cq_size": 0, 00:15:06.438 "rdma_cm_event_timeout_ms": 0, 00:15:06.438 "dhchap_digests": [ 00:15:06.438 "sha256", 00:15:06.438 "sha384", 00:15:06.438 "sha512" 00:15:06.438 ], 00:15:06.438 "dhchap_dhgroups": [ 00:15:06.438 "null", 00:15:06.438 "ffdhe2048", 00:15:06.438 "ffdhe3072", 00:15:06.438 "ffdhe4096", 00:15:06.438 "ffdhe6144", 00:15:06.438 "ffdhe8192" 00:15:06.438 ] 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "bdev_nvme_set_hotplug", 00:15:06.438 "params": { 00:15:06.438 "period_us": 100000, 00:15:06.438 "enable": false 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "bdev_malloc_create", 00:15:06.438 "params": { 00:15:06.438 "name": "malloc0", 00:15:06.438 "num_blocks": 8192, 00:15:06.438 "block_size": 4096, 00:15:06.438 
"physical_block_size": 4096, 00:15:06.438 "uuid": "b1b8698d-778f-4129-83ea-a3615e906768", 00:15:06.438 "optimal_io_boundary": 0, 00:15:06.438 "md_size": 0, 00:15:06.438 "dif_type": 0, 00:15:06.438 "dif_is_head_of_md": false, 00:15:06.438 "dif_pi_format": 0 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "bdev_wait_for_examine" 00:15:06.438 } 00:15:06.438 ] 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "subsystem": "nbd", 00:15:06.438 "config": [] 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "subsystem": "scheduler", 00:15:06.438 "config": [ 00:15:06.438 { 00:15:06.438 "method": "framework_set_scheduler", 00:15:06.438 "params": { 00:15:06.438 "name": "static" 00:15:06.438 } 00:15:06.438 } 00:15:06.438 ] 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "subsystem": "nvmf", 00:15:06.438 "config": [ 00:15:06.438 { 00:15:06.438 "method": "nvmf_set_config", 00:15:06.438 "params": { 00:15:06.438 "discovery_filter": "match_any", 00:15:06.438 "admin_cmd_passthru": { 00:15:06.438 "identify_ctrlr": false 00:15:06.438 }, 00:15:06.438 "dhchap_digests": [ 00:15:06.438 "sha256", 00:15:06.438 "sha384", 00:15:06.438 "sha512" 00:15:06.438 ], 00:15:06.438 "dhchap_dhgroups": [ 00:15:06.438 "null", 00:15:06.438 "ffdhe2048", 00:15:06.438 "ffdhe3072", 00:15:06.438 "ffdhe4096", 00:15:06.438 "ffdhe6144", 00:15:06.438 "ffdhe8192" 00:15:06.438 ] 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "nvmf_set_max_subsystems", 00:15:06.438 "params": { 00:15:06.438 "max_subsystems": 1024 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "nvmf_set_crdt", 00:15:06.438 "params": { 00:15:06.438 "crdt1": 0, 00:15:06.438 "crdt2": 0, 00:15:06.438 "crdt3": 0 00:15:06.438 } 00:15:06.438 }, 00:15:06.438 { 00:15:06.438 "method": "nvmf_create_transport", 00:15:06.438 "params": { 00:15:06.438 "trtype": "TCP", 00:15:06.438 "max_queue_depth": 128, 00:15:06.438 "max_io_qpairs_per_ctrlr": 127, 00:15:06.438 "in_capsule_data_size": 4096, 00:15:06.438 "max_io_size": 131072, 00:15:06.439 "io_unit_size": 131072, 00:15:06.439 "max_aq_depth": 128, 00:15:06.439 "num_shared_buffers": 511, 00:15:06.439 "buf_cache_size": 4294967295, 00:15:06.439 "dif_insert_or_strip": false, 00:15:06.439 "zcopy": false, 00:15:06.439 "c2h_success": false, 00:15:06.439 "sock_priority": 0, 00:15:06.439 "abort_timeout_sec": 1, 00:15:06.439 "ack_timeout": 0, 00:15:06.439 "data_wr_pool_size": 0 00:15:06.439 } 00:15:06.439 }, 00:15:06.439 { 00:15:06.439 "method": "nvmf_create_subsystem", 00:15:06.439 "params": { 00:15:06.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.439 "allow_any_host": false, 00:15:06.439 "serial_number": "SPDK00000000000001", 00:15:06.439 "model_number": "SPDK bdev Controller", 00:15:06.439 "max_namespaces": 10, 00:15:06.439 "min_cntlid": 1, 00:15:06.439 "max_cntlid": 65519, 00:15:06.439 "ana_reporting": false 00:15:06.439 } 00:15:06.439 }, 00:15:06.439 { 00:15:06.439 "method": "nvmf_subsystem_add_host", 00:15:06.439 "params": { 00:15:06.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.439 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.439 "psk": "key0" 00:15:06.439 } 00:15:06.439 }, 00:15:06.439 { 00:15:06.439 "method": "nvmf_subsystem_add_ns", 00:15:06.439 "params": { 00:15:06.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.439 "namespace": { 00:15:06.439 "nsid": 1, 00:15:06.439 "bdev_name": "malloc0", 00:15:06.439 "nguid": "B1B8698D778F412983EAA3615E906768", 00:15:06.439 "uuid": "b1b8698d-778f-4129-83ea-a3615e906768", 00:15:06.439 "no_auto_visible": false 00:15:06.439 } 00:15:06.439 } 
00:15:06.439 }, 00:15:06.439 { 00:15:06.439 "method": "nvmf_subsystem_add_listener", 00:15:06.439 "params": { 00:15:06.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.439 "listen_address": { 00:15:06.439 "trtype": "TCP", 00:15:06.439 "adrfam": "IPv4", 00:15:06.439 "traddr": "10.0.0.3", 00:15:06.439 "trsvcid": "4420" 00:15:06.439 }, 00:15:06.439 "secure_channel": true 00:15:06.439 } 00:15:06.439 } 00:15:06.439 ] 00:15:06.439 } 00:15:06.439 ] 00:15:06.439 }' 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84035 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84035 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84035 ']' 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.439 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.698 [2024-12-08 18:32:24.408699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:06.698 [2024-12-08 18:32:24.408979] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.698 [2024-12-08 18:32:24.543102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.956 [2024-12-08 18:32:24.637097] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.956 [2024-12-08 18:32:24.637161] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.956 [2024-12-08 18:32:24.637171] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.956 [2024-12-08 18:32:24.637178] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.956 [2024-12-08 18:32:24.637184] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
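The target that just came up listening on 10.0.0.3:4420 was not configured RPC by RPC; this part of the test replays the JSON captured earlier with save_config. Roughly, using the variable names from tls.sh:

  tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)                                   # target-side JSON dumped above
  bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)    # initiator-side JSON
  # a fresh target reads the saved config through process substitution (/dev/fd/62 in the log)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
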
00:15:06.956 [2024-12-08 18:32:24.637273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.956 [2024-12-08 18:32:24.831934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:07.215 [2024-12-08 18:32:24.929254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.215 [2024-12-08 18:32:24.970901] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.215 [2024-12-08 18:32:24.971259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84067 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84067 /var/tmp/bdevperf.sock 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84067 ']' 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:07.782 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:07.782 "subsystems": [ 00:15:07.782 { 00:15:07.782 "subsystem": "keyring", 00:15:07.782 "config": [ 00:15:07.782 { 00:15:07.782 "method": "keyring_file_add_key", 00:15:07.782 "params": { 00:15:07.782 "name": "key0", 00:15:07.782 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:07.782 } 00:15:07.782 } 00:15:07.782 ] 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "subsystem": "iobuf", 00:15:07.782 "config": [ 00:15:07.782 { 00:15:07.782 "method": "iobuf_set_options", 00:15:07.782 "params": { 00:15:07.782 "small_pool_count": 8192, 00:15:07.782 "large_pool_count": 1024, 00:15:07.782 "small_bufsize": 8192, 00:15:07.782 "large_bufsize": 135168 00:15:07.782 } 00:15:07.782 } 00:15:07.782 ] 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "subsystem": "sock", 00:15:07.782 "config": [ 00:15:07.782 { 00:15:07.782 "method": "sock_set_default_impl", 00:15:07.782 "params": { 00:15:07.782 "impl_name": "uring" 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "sock_impl_set_options", 00:15:07.782 "params": { 00:15:07.782 "impl_name": "ssl", 00:15:07.782 "recv_buf_size": 4096, 00:15:07.782 "send_buf_size": 4096, 00:15:07.782 "enable_recv_pipe": true, 00:15:07.782 "enable_quickack": false, 00:15:07.782 "enable_placement_id": 0, 00:15:07.782 "enable_zerocopy_send_server": true, 00:15:07.782 "enable_zerocopy_send_client": false, 00:15:07.782 "zerocopy_threshold": 0, 00:15:07.782 "tls_version": 0, 00:15:07.782 "enable_ktls": false 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "sock_impl_set_options", 00:15:07.782 "params": { 00:15:07.782 "impl_name": "posix", 00:15:07.782 "recv_buf_size": 2097152, 00:15:07.782 "send_buf_size": 2097152, 00:15:07.782 "enable_recv_pipe": true, 00:15:07.782 "enable_quickack": false, 00:15:07.782 "enable_placement_id": 0, 00:15:07.782 "enable_zerocopy_send_server": true, 00:15:07.782 "enable_zerocopy_send_client": false, 00:15:07.782 "zerocopy_threshold": 0, 00:15:07.782 "tls_version": 0, 00:15:07.782 "enable_ktls": false 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "sock_impl_set_options", 00:15:07.782 "params": { 00:15:07.782 "impl_name": "uring", 00:15:07.782 "recv_buf_size": 2097152, 00:15:07.782 "send_buf_size": 2097152, 00:15:07.782 "enable_recv_pipe": true, 00:15:07.782 "enable_quickack": false, 00:15:07.782 "enable_placement_id": 0, 00:15:07.782 "enable_zerocopy_send_server": false, 00:15:07.782 "enable_zerocopy_send_client": false, 00:15:07.782 "zerocopy_threshold": 0, 00:15:07.782 "tls_version": 0, 00:15:07.782 "enable_ktls": false 00:15:07.782 } 00:15:07.782 } 00:15:07.782 ] 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "subsystem": "vmd", 00:15:07.782 "config": [] 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "subsystem": "accel", 00:15:07.782 "config": [ 00:15:07.782 { 00:15:07.782 "method": "accel_set_options", 00:15:07.782 "params": { 00:15:07.782 "small_cache_size": 128, 00:15:07.782 "large_cache_size": 16, 00:15:07.782 "task_count": 2048, 00:15:07.782 "sequence_count": 2048, 00:15:07.782 "buf_count": 2048 
00:15:07.782 } 00:15:07.782 } 00:15:07.782 ] 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "subsystem": "bdev", 00:15:07.782 "config": [ 00:15:07.782 { 00:15:07.782 "method": "bdev_set_options", 00:15:07.782 "params": { 00:15:07.782 "bdev_io_pool_size": 65535, 00:15:07.782 "bdev_io_cache_size": 256, 00:15:07.782 "bdev_auto_examine": true, 00:15:07.782 "iobuf_small_cache_size": 128, 00:15:07.782 "iobuf_large_cache_size": 16 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "bdev_raid_set_options", 00:15:07.782 "params": { 00:15:07.782 "process_window_size_kb": 1024, 00:15:07.782 "process_max_bandwidth_mb_sec": 0 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "bdev_iscsi_set_options", 00:15:07.782 "params": { 00:15:07.782 "timeout_sec": 30 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "bdev_nvme_set_options", 00:15:07.782 "params": { 00:15:07.782 "action_on_timeout": "none", 00:15:07.782 "timeout_us": 0, 00:15:07.782 "timeout_admin_us": 0, 00:15:07.782 "keep_alive_timeout_ms": 10000, 00:15:07.782 "arbitration_burst": 0, 00:15:07.782 "low_priority_weight": 0, 00:15:07.782 "medium_priority_weight": 0, 00:15:07.782 "high_priority_weight": 0, 00:15:07.782 "nvme_adminq_poll_period_us": 10000, 00:15:07.782 "nvme_ioq_poll_period_us": 0, 00:15:07.782 "io_queue_requests": 512, 00:15:07.782 "delay_cmd_submit": true, 00:15:07.782 "transport_retry_count": 4, 00:15:07.782 "bdev_retry_count": 3, 00:15:07.782 "transport_ack_timeout": 0, 00:15:07.782 "ctrlr_loss_timeout_sec": 0, 00:15:07.782 "reconnect_delay_sec": 0, 00:15:07.782 "fast_io_fail_timeout_sec": 0, 00:15:07.782 "disable_auto_failback": false, 00:15:07.782 "generate_uuids": false, 00:15:07.782 "transport_tos": 0, 00:15:07.782 "nvme_error_stat": false, 00:15:07.782 "rdma_srq_size": 0, 00:15:07.782 "io_path_stat": false, 00:15:07.782 "allow_accel_sequence": false, 00:15:07.782 "rdma_max_cq_size": 0, 00:15:07.782 "rdma_cm_event_timeout_ms": 0, 00:15:07.782 "dhchap_digests": [ 00:15:07.782 "sha256", 00:15:07.782 "sha384", 00:15:07.782 "sha512" 00:15:07.782 ], 00:15:07.782 "dhchap_dhgroups": [ 00:15:07.782 "null", 00:15:07.782 "ffdhe2048", 00:15:07.782 "ffdhe3072", 00:15:07.782 "ffdhe4096", 00:15:07.782 "ffdhe6144", 00:15:07.782 "ffdhe8192" 00:15:07.782 ] 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "bdev_nvme_attach_controller", 00:15:07.782 "params": { 00:15:07.782 "name": "TLSTEST", 00:15:07.782 "trtype": "TCP", 00:15:07.782 "adrfam": "IPv4", 00:15:07.782 "traddr": "10.0.0.3", 00:15:07.782 "trsvcid": "4420", 00:15:07.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.782 "prchk_reftag": false, 00:15:07.782 "prchk_guard": false, 00:15:07.782 "ctrlr_loss_timeout_sec": 0, 00:15:07.782 "reconnect_delay_sec": 0, 00:15:07.782 "fast_io_fail_timeout_sec": 0, 00:15:07.782 "psk": "key0", 00:15:07.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.782 "hdgst": false, 00:15:07.782 "ddgst": false 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "bdev_nvme_set_hotplug", 00:15:07.782 "params": { 00:15:07.782 "period_us": 100000, 00:15:07.782 "enable": false 00:15:07.782 } 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "method": "bdev_wait_for_examine" 00:15:07.782 } 00:15:07.782 ] 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "subsystem": "nbd", 00:15:07.782 "config": [] 00:15:07.782 } 00:15:07.782 ] 00:15:07.782 }' 00:15:07.782 [2024-12-08 18:32:25.510544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:07.782 [2024-12-08 18:32:25.510866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 00:15:07.782 [2024-12-08 18:32:25.652040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.040 [2024-12-08 18:32:25.750474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.040 [2024-12-08 18:32:25.891511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.040 [2024-12-08 18:32:25.939159] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.606 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.606 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:08.606 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:08.863 Running I/O for 10 seconds... 00:15:10.735 4294.00 IOPS, 16.77 MiB/s [2024-12-08T18:32:30.046Z] 4332.00 IOPS, 16.92 MiB/s [2024-12-08T18:32:30.983Z] 4320.67 IOPS, 16.88 MiB/s [2024-12-08T18:32:31.920Z] 4350.50 IOPS, 16.99 MiB/s [2024-12-08T18:32:32.857Z] 4218.00 IOPS, 16.48 MiB/s [2024-12-08T18:32:33.794Z] 4011.50 IOPS, 15.67 MiB/s [2024-12-08T18:32:34.732Z] 3932.71 IOPS, 15.36 MiB/s [2024-12-08T18:32:35.760Z] 3978.88 IOPS, 15.54 MiB/s [2024-12-08T18:32:36.696Z] 4001.11 IOPS, 15.63 MiB/s [2024-12-08T18:32:36.696Z] 4061.50 IOPS, 15.87 MiB/s 00:15:18.766 Latency(us) 00:15:18.766 [2024-12-08T18:32:36.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.766 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:18.766 Verification LBA range: start 0x0 length 0x2000 00:15:18.766 TLSTESTn1 : 10.01 4068.20 15.89 0.00 0.00 31408.89 5332.25 31933.91 00:15:18.766 [2024-12-08T18:32:36.696Z] =================================================================================================================== 00:15:18.766 [2024-12-08T18:32:36.696Z] Total : 4068.20 15.89 0.00 0.00 31408.89 5332.25 31933.91 00:15:18.766 { 00:15:18.766 "results": [ 00:15:18.766 { 00:15:18.767 "job": "TLSTESTn1", 00:15:18.767 "core_mask": "0x4", 00:15:18.767 "workload": "verify", 00:15:18.767 "status": "finished", 00:15:18.767 "verify_range": { 00:15:18.767 "start": 0, 00:15:18.767 "length": 8192 00:15:18.767 }, 00:15:18.767 "queue_depth": 128, 00:15:18.767 "io_size": 4096, 00:15:18.767 "runtime": 10.014997, 00:15:18.767 "iops": 4068.1989220765618, 00:15:18.767 "mibps": 15.89140203936157, 00:15:18.767 "io_failed": 0, 00:15:18.767 "io_timeout": 0, 00:15:18.767 "avg_latency_us": 31408.887624377196, 00:15:18.767 "min_latency_us": 5332.2472727272725, 00:15:18.767 "max_latency_us": 31933.905454545453 00:15:18.767 } 00:15:18.767 ], 00:15:18.767 "core_count": 1 00:15:18.767 } 00:15:18.767 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.767 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84067 00:15:18.767 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84067 ']' 00:15:18.767 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 84067 00:15:18.767 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:18.767 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.767 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84067 00:15:19.026 killing process with pid 84067 00:15:19.026 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.026 00:15:19.026 Latency(us) 00:15:19.026 [2024-12-08T18:32:36.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.026 [2024-12-08T18:32:36.956Z] =================================================================================================================== 00:15:19.026 [2024-12-08T18:32:36.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84067' 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84067 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84067 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84035 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84035 ']' 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84035 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84035 00:15:19.026 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:19.026 killing process with pid 84035 00:15:19.027 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:19.027 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84035' 00:15:19.027 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84035 00:15:19.027 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84035 00:15:19.286 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:19.286 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:19.286 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.286 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
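The 10-second TLSTESTn1 verify run above is driven entirely by the inline JSON shown at target/tls.sh@206: bdevperf starts idle (-z) with its configuration delivered over an anonymous pipe, and the workload is then triggered through its RPC socket at tls.sh@213. A rough stand-alone equivalent, assuming the same repo layout and treating $bdevperf_config as a stand-in for that JSON (the log's /dev/fd/63 is bash process substitution), would be:

  SPDK=/home/vagrant/spdk_repo/spdk
  # start bdevperf idle on core 2 (-m 0x4), config fed via process substitution
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperf_config") &
  # once the RPC socket is up, kick off the verify workload (20 s RPC timeout)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests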
00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84210 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84210 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84210 ']' 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.547 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.547 [2024-12-08 18:32:37.266752] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:19.547 [2024-12-08 18:32:37.267237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.547 [2024-12-08 18:32:37.407538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.806 [2024-12-08 18:32:37.491949] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.806 [2024-12-08 18:32:37.492320] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.806 [2024-12-08 18:32:37.492543] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.806 [2024-12-08 18:32:37.492755] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.806 [2024-12-08 18:32:37.492957] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
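The nvmfappstart above boils down to launching nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace with tracing enabled and blocking until its RPC socket answers. A simplified sketch (the polling loop is only a stand-in for the waitforlisten helper, not its actual implementation):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # poll /var/tmp/spdk.sock (rpc.py's default) until the target accepts RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done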
00:15:19.806 [2024-12-08 18:32:37.493006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.806 [2024-12-08 18:32:37.550478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.375 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.375 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:20.375 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:20.375 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:20.375 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.635 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.635 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.BIPDQsMCYr 00:15:20.635 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BIPDQsMCYr 00:15:20.635 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:20.635 [2024-12-08 18:32:38.525287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.635 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:21.204 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:21.204 [2024-12-08 18:32:39.085361] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:21.204 [2024-12-08 18:32:39.085604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:21.204 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:21.463 malloc0 00:15:21.463 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:21.723 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:15:21.982 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:22.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
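Condensed from the setup_nvmf_tgt calls above (target/tls.sh@52-@59), the target-side TLS configuration is this RPC sequence; every command and argument appears in the log, only the shell framing and comments are added:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -k            # -k: TLS listener (flagged experimental in the log)
  $rpc bdev_malloc_create 32 4096 -b malloc0   # 32 MB backing bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0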
00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84267 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84267 /var/tmp/bdevperf.sock 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84267 ']' 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.330 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:22.330 [2024-12-08 18:32:40.116073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:22.330 [2024-12-08 18:32:40.116162] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84267 ] 00:15:22.330 [2024-12-08 18:32:40.249697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.588 [2024-12-08 18:32:40.309201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.588 [2024-12-08 18:32:40.360153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.588 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.588 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:22.588 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:15:22.846 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:23.105 [2024-12-08 18:32:40.905690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:23.105 nvme0n1 00:15:23.105 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:23.364 Running I/O for 1 seconds... 
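On the initiator side the same PSK file is registered through bdevperf's own RPC socket before the controller attach; condensed from the rpc.py calls above, with all arguments as recorded:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests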
00:15:24.300 4352.00 IOPS, 17.00 MiB/s 00:15:24.300 Latency(us) 00:15:24.300 [2024-12-08T18:32:42.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.300 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:24.300 Verification LBA range: start 0x0 length 0x2000 00:15:24.300 nvme0n1 : 1.02 4395.66 17.17 0.00 0.00 28848.05 10783.65 26095.24 00:15:24.300 [2024-12-08T18:32:42.230Z] =================================================================================================================== 00:15:24.300 [2024-12-08T18:32:42.230Z] Total : 4395.66 17.17 0.00 0.00 28848.05 10783.65 26095.24 00:15:24.300 { 00:15:24.300 "results": [ 00:15:24.300 { 00:15:24.300 "job": "nvme0n1", 00:15:24.300 "core_mask": "0x2", 00:15:24.300 "workload": "verify", 00:15:24.300 "status": "finished", 00:15:24.300 "verify_range": { 00:15:24.300 "start": 0, 00:15:24.300 "length": 8192 00:15:24.300 }, 00:15:24.300 "queue_depth": 128, 00:15:24.300 "io_size": 4096, 00:15:24.300 "runtime": 1.019188, 00:15:24.300 "iops": 4395.656149797682, 00:15:24.300 "mibps": 17.170531835147194, 00:15:24.300 "io_failed": 0, 00:15:24.300 "io_timeout": 0, 00:15:24.300 "avg_latency_us": 28848.054857142855, 00:15:24.300 "min_latency_us": 10783.65090909091, 00:15:24.300 "max_latency_us": 26095.243636363637 00:15:24.300 } 00:15:24.300 ], 00:15:24.300 "core_count": 1 00:15:24.300 } 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84267 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84267 ']' 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84267 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84267 00:15:24.300 killing process with pid 84267 00:15:24.300 Received shutdown signal, test time was about 1.000000 seconds 00:15:24.300 00:15:24.300 Latency(us) 00:15:24.300 [2024-12-08T18:32:42.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.300 [2024-12-08T18:32:42.230Z] =================================================================================================================== 00:15:24.300 [2024-12-08T18:32:42.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84267' 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84267 00:15:24.300 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84267 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84210 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84210 ']' 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84210 00:15:24.559 18:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84210 00:15:24.559 killing process with pid 84210 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84210' 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84210 00:15:24.559 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84210 00:15:24.817 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84306 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84306 00:15:24.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84306 ']' 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.818 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.818 [2024-12-08 18:32:42.646380] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:24.818 [2024-12-08 18:32:42.646761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.075 [2024-12-08 18:32:42.781642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.075 [2024-12-08 18:32:42.853856] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.075 [2024-12-08 18:32:42.853906] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.075 [2024-12-08 18:32:42.853917] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.075 [2024-12-08 18:32:42.853925] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.075 [2024-12-08 18:32:42.853931] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.075 [2024-12-08 18:32:42.853957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.075 [2024-12-08 18:32:42.910821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.013 [2024-12-08 18:32:43.690476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.013 malloc0 00:15:26.013 [2024-12-08 18:32:43.748173] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:26.013 [2024-12-08 18:32:43.748516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84338 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84338 /var/tmp/bdevperf.sock 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84338 ']' 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.013 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.013 [2024-12-08 18:32:43.820445] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:26.013 [2024-12-08 18:32:43.820567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84338 ] 00:15:26.271 [2024-12-08 18:32:43.956466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.271 [2024-12-08 18:32:44.042951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.271 [2024-12-08 18:32:44.098188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.271 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.271 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:26.271 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BIPDQsMCYr 00:15:26.529 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:26.787 [2024-12-08 18:32:44.651297] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:27.046 nvme0n1 00:15:27.046 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:27.046 Running I/O for 1 seconds... 
00:15:27.980 4393.00 IOPS, 17.16 MiB/s 00:15:27.980 Latency(us) 00:15:27.980 [2024-12-08T18:32:45.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.980 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.980 Verification LBA range: start 0x0 length 0x2000 00:15:27.980 nvme0n1 : 1.02 4452.66 17.39 0.00 0.00 28487.71 5600.35 24427.05 00:15:27.980 [2024-12-08T18:32:45.910Z] =================================================================================================================== 00:15:27.980 [2024-12-08T18:32:45.910Z] Total : 4452.66 17.39 0.00 0.00 28487.71 5600.35 24427.05 00:15:27.980 { 00:15:27.980 "results": [ 00:15:27.980 { 00:15:27.980 "job": "nvme0n1", 00:15:27.980 "core_mask": "0x2", 00:15:27.980 "workload": "verify", 00:15:27.980 "status": "finished", 00:15:27.980 "verify_range": { 00:15:27.980 "start": 0, 00:15:27.980 "length": 8192 00:15:27.980 }, 00:15:27.980 "queue_depth": 128, 00:15:27.980 "io_size": 4096, 00:15:27.980 "runtime": 1.015347, 00:15:27.980 "iops": 4452.664950997048, 00:15:27.980 "mibps": 17.39322246483222, 00:15:27.980 "io_failed": 0, 00:15:27.980 "io_timeout": 0, 00:15:27.980 "avg_latency_us": 28487.71088616758, 00:15:27.980 "min_latency_us": 5600.349090909091, 00:15:27.980 "max_latency_us": 24427.054545454546 00:15:27.980 } 00:15:27.980 ], 00:15:27.980 "core_count": 1 00:15:27.980 } 00:15:27.980 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:27.980 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.980 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.239 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.239 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:28.239 "subsystems": [ 00:15:28.239 { 00:15:28.239 "subsystem": "keyring", 00:15:28.239 "config": [ 00:15:28.239 { 00:15:28.239 "method": "keyring_file_add_key", 00:15:28.239 "params": { 00:15:28.239 "name": "key0", 00:15:28.239 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:28.239 } 00:15:28.239 } 00:15:28.239 ] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "iobuf", 00:15:28.239 "config": [ 00:15:28.239 { 00:15:28.239 "method": "iobuf_set_options", 00:15:28.239 "params": { 00:15:28.239 "small_pool_count": 8192, 00:15:28.239 "large_pool_count": 1024, 00:15:28.239 "small_bufsize": 8192, 00:15:28.239 "large_bufsize": 135168 00:15:28.239 } 00:15:28.239 } 00:15:28.239 ] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "sock", 00:15:28.239 "config": [ 00:15:28.239 { 00:15:28.239 "method": "sock_set_default_impl", 00:15:28.239 "params": { 00:15:28.239 "impl_name": "uring" 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "sock_impl_set_options", 00:15:28.239 "params": { 00:15:28.239 "impl_name": "ssl", 00:15:28.239 "recv_buf_size": 4096, 00:15:28.239 "send_buf_size": 4096, 00:15:28.239 "enable_recv_pipe": true, 00:15:28.239 "enable_quickack": false, 00:15:28.239 "enable_placement_id": 0, 00:15:28.239 "enable_zerocopy_send_server": true, 00:15:28.239 "enable_zerocopy_send_client": false, 00:15:28.239 "zerocopy_threshold": 0, 00:15:28.239 "tls_version": 0, 00:15:28.239 "enable_ktls": false 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "sock_impl_set_options", 00:15:28.239 "params": { 00:15:28.239 "impl_name": "posix", 00:15:28.239 "recv_buf_size": 
2097152, 00:15:28.239 "send_buf_size": 2097152, 00:15:28.239 "enable_recv_pipe": true, 00:15:28.239 "enable_quickack": false, 00:15:28.239 "enable_placement_id": 0, 00:15:28.239 "enable_zerocopy_send_server": true, 00:15:28.239 "enable_zerocopy_send_client": false, 00:15:28.239 "zerocopy_threshold": 0, 00:15:28.239 "tls_version": 0, 00:15:28.239 "enable_ktls": false 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "sock_impl_set_options", 00:15:28.239 "params": { 00:15:28.239 "impl_name": "uring", 00:15:28.239 "recv_buf_size": 2097152, 00:15:28.239 "send_buf_size": 2097152, 00:15:28.239 "enable_recv_pipe": true, 00:15:28.239 "enable_quickack": false, 00:15:28.239 "enable_placement_id": 0, 00:15:28.239 "enable_zerocopy_send_server": false, 00:15:28.239 "enable_zerocopy_send_client": false, 00:15:28.239 "zerocopy_threshold": 0, 00:15:28.239 "tls_version": 0, 00:15:28.239 "enable_ktls": false 00:15:28.239 } 00:15:28.239 } 00:15:28.239 ] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "vmd", 00:15:28.239 "config": [] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "accel", 00:15:28.239 "config": [ 00:15:28.239 { 00:15:28.239 "method": "accel_set_options", 00:15:28.239 "params": { 00:15:28.239 "small_cache_size": 128, 00:15:28.239 "large_cache_size": 16, 00:15:28.239 "task_count": 2048, 00:15:28.239 "sequence_count": 2048, 00:15:28.239 "buf_count": 2048 00:15:28.239 } 00:15:28.239 } 00:15:28.239 ] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "bdev", 00:15:28.239 "config": [ 00:15:28.239 { 00:15:28.239 "method": "bdev_set_options", 00:15:28.239 "params": { 00:15:28.239 "bdev_io_pool_size": 65535, 00:15:28.239 "bdev_io_cache_size": 256, 00:15:28.239 "bdev_auto_examine": true, 00:15:28.239 "iobuf_small_cache_size": 128, 00:15:28.239 "iobuf_large_cache_size": 16 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "bdev_raid_set_options", 00:15:28.239 "params": { 00:15:28.239 "process_window_size_kb": 1024, 00:15:28.239 "process_max_bandwidth_mb_sec": 0 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "bdev_iscsi_set_options", 00:15:28.239 "params": { 00:15:28.239 "timeout_sec": 30 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "bdev_nvme_set_options", 00:15:28.239 "params": { 00:15:28.239 "action_on_timeout": "none", 00:15:28.239 "timeout_us": 0, 00:15:28.239 "timeout_admin_us": 0, 00:15:28.239 "keep_alive_timeout_ms": 10000, 00:15:28.239 "arbitration_burst": 0, 00:15:28.239 "low_priority_weight": 0, 00:15:28.239 "medium_priority_weight": 0, 00:15:28.239 "high_priority_weight": 0, 00:15:28.239 "nvme_adminq_poll_period_us": 10000, 00:15:28.239 "nvme_ioq_poll_period_us": 0, 00:15:28.239 "io_queue_requests": 0, 00:15:28.239 "delay_cmd_submit": true, 00:15:28.239 "transport_retry_count": 4, 00:15:28.239 "bdev_retry_count": 3, 00:15:28.239 "transport_ack_timeout": 0, 00:15:28.239 "ctrlr_loss_timeout_sec": 0, 00:15:28.239 "reconnect_delay_sec": 0, 00:15:28.239 "fast_io_fail_timeout_sec": 0, 00:15:28.239 "disable_auto_failback": false, 00:15:28.239 "generate_uuids": false, 00:15:28.239 "transport_tos": 0, 00:15:28.239 "nvme_error_stat": false, 00:15:28.239 "rdma_srq_size": 0, 00:15:28.239 "io_path_stat": false, 00:15:28.239 "allow_accel_sequence": false, 00:15:28.239 "rdma_max_cq_size": 0, 00:15:28.239 "rdma_cm_event_timeout_ms": 0, 00:15:28.239 "dhchap_digests": [ 00:15:28.239 "sha256", 00:15:28.239 "sha384", 00:15:28.239 "sha512" 00:15:28.239 ], 00:15:28.239 "dhchap_dhgroups": [ 00:15:28.239 
"null", 00:15:28.239 "ffdhe2048", 00:15:28.239 "ffdhe3072", 00:15:28.239 "ffdhe4096", 00:15:28.239 "ffdhe6144", 00:15:28.239 "ffdhe8192" 00:15:28.239 ] 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "bdev_nvme_set_hotplug", 00:15:28.239 "params": { 00:15:28.239 "period_us": 100000, 00:15:28.239 "enable": false 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "bdev_malloc_create", 00:15:28.239 "params": { 00:15:28.239 "name": "malloc0", 00:15:28.239 "num_blocks": 8192, 00:15:28.239 "block_size": 4096, 00:15:28.239 "physical_block_size": 4096, 00:15:28.239 "uuid": "3dc680d6-256e-4d80-9e3d-e9bb55125d1b", 00:15:28.239 "optimal_io_boundary": 0, 00:15:28.239 "md_size": 0, 00:15:28.239 "dif_type": 0, 00:15:28.239 "dif_is_head_of_md": false, 00:15:28.239 "dif_pi_format": 0 00:15:28.239 } 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "method": "bdev_wait_for_examine" 00:15:28.239 } 00:15:28.239 ] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "nbd", 00:15:28.239 "config": [] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "scheduler", 00:15:28.239 "config": [ 00:15:28.239 { 00:15:28.239 "method": "framework_set_scheduler", 00:15:28.239 "params": { 00:15:28.239 "name": "static" 00:15:28.239 } 00:15:28.239 } 00:15:28.239 ] 00:15:28.239 }, 00:15:28.239 { 00:15:28.239 "subsystem": "nvmf", 00:15:28.239 "config": [ 00:15:28.239 { 00:15:28.239 "method": "nvmf_set_config", 00:15:28.239 "params": { 00:15:28.239 "discovery_filter": "match_any", 00:15:28.239 "admin_cmd_passthru": { 00:15:28.239 "identify_ctrlr": false 00:15:28.239 }, 00:15:28.239 "dhchap_digests": [ 00:15:28.239 "sha256", 00:15:28.239 "sha384", 00:15:28.239 "sha512" 00:15:28.239 ], 00:15:28.239 "dhchap_dhgroups": [ 00:15:28.239 "null", 00:15:28.239 "ffdhe2048", 00:15:28.240 "ffdhe3072", 00:15:28.240 "ffdhe4096", 00:15:28.240 "ffdhe6144", 00:15:28.240 "ffdhe8192" 00:15:28.240 ] 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "method": "nvmf_set_max_subsystems", 00:15:28.240 "params": { 00:15:28.240 "max_subsystems": 1024 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "method": "nvmf_set_crdt", 00:15:28.240 "params": { 00:15:28.240 "crdt1": 0, 00:15:28.240 "crdt2": 0, 00:15:28.240 "crdt3": 0 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "method": "nvmf_create_transport", 00:15:28.240 "params": { 00:15:28.240 "trtype": "TCP", 00:15:28.240 "max_queue_depth": 128, 00:15:28.240 "max_io_qpairs_per_ctrlr": 127, 00:15:28.240 "in_capsule_data_size": 4096, 00:15:28.240 "max_io_size": 131072, 00:15:28.240 "io_unit_size": 131072, 00:15:28.240 "max_aq_depth": 128, 00:15:28.240 "num_shared_buffers": 511, 00:15:28.240 "buf_cache_size": 4294967295, 00:15:28.240 "dif_insert_or_strip": false, 00:15:28.240 "zcopy": false, 00:15:28.240 "c2h_success": false, 00:15:28.240 "sock_priority": 0, 00:15:28.240 "abort_timeout_sec": 1, 00:15:28.240 "ack_timeout": 0, 00:15:28.240 "data_wr_pool_size": 0 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "method": "nvmf_create_subsystem", 00:15:28.240 "params": { 00:15:28.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.240 "allow_any_host": false, 00:15:28.240 "serial_number": "00000000000000000000", 00:15:28.240 "model_number": "SPDK bdev Controller", 00:15:28.240 "max_namespaces": 32, 00:15:28.240 "min_cntlid": 1, 00:15:28.240 "max_cntlid": 65519, 00:15:28.240 "ana_reporting": false 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "method": "nvmf_subsystem_add_host", 00:15:28.240 "params": { 00:15:28.240 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:28.240 "host": "nqn.2016-06.io.spdk:host1", 00:15:28.240 "psk": "key0" 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "method": "nvmf_subsystem_add_ns", 00:15:28.240 "params": { 00:15:28.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.240 "namespace": { 00:15:28.240 "nsid": 1, 00:15:28.240 "bdev_name": "malloc0", 00:15:28.240 "nguid": "3DC680D6256E4D809E3DE9BB55125D1B", 00:15:28.240 "uuid": "3dc680d6-256e-4d80-9e3d-e9bb55125d1b", 00:15:28.240 "no_auto_visible": false 00:15:28.240 } 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "method": "nvmf_subsystem_add_listener", 00:15:28.240 "params": { 00:15:28.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.240 "listen_address": { 00:15:28.240 "trtype": "TCP", 00:15:28.240 "adrfam": "IPv4", 00:15:28.240 "traddr": "10.0.0.3", 00:15:28.240 "trsvcid": "4420" 00:15:28.240 }, 00:15:28.240 "secure_channel": false, 00:15:28.240 "sock_impl": "ssl" 00:15:28.240 } 00:15:28.240 } 00:15:28.240 ] 00:15:28.240 } 00:15:28.240 ] 00:15:28.240 }' 00:15:28.240 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:28.499 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:28.499 "subsystems": [ 00:15:28.499 { 00:15:28.499 "subsystem": "keyring", 00:15:28.499 "config": [ 00:15:28.499 { 00:15:28.499 "method": "keyring_file_add_key", 00:15:28.499 "params": { 00:15:28.499 "name": "key0", 00:15:28.499 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:28.499 } 00:15:28.499 } 00:15:28.499 ] 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "subsystem": "iobuf", 00:15:28.499 "config": [ 00:15:28.499 { 00:15:28.499 "method": "iobuf_set_options", 00:15:28.499 "params": { 00:15:28.499 "small_pool_count": 8192, 00:15:28.499 "large_pool_count": 1024, 00:15:28.499 "small_bufsize": 8192, 00:15:28.499 "large_bufsize": 135168 00:15:28.499 } 00:15:28.499 } 00:15:28.499 ] 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "subsystem": "sock", 00:15:28.499 "config": [ 00:15:28.499 { 00:15:28.499 "method": "sock_set_default_impl", 00:15:28.499 "params": { 00:15:28.499 "impl_name": "uring" 00:15:28.499 } 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "method": "sock_impl_set_options", 00:15:28.499 "params": { 00:15:28.499 "impl_name": "ssl", 00:15:28.499 "recv_buf_size": 4096, 00:15:28.499 "send_buf_size": 4096, 00:15:28.499 "enable_recv_pipe": true, 00:15:28.499 "enable_quickack": false, 00:15:28.499 "enable_placement_id": 0, 00:15:28.499 "enable_zerocopy_send_server": true, 00:15:28.499 "enable_zerocopy_send_client": false, 00:15:28.499 "zerocopy_threshold": 0, 00:15:28.499 "tls_version": 0, 00:15:28.499 "enable_ktls": false 00:15:28.499 } 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "method": "sock_impl_set_options", 00:15:28.499 "params": { 00:15:28.499 "impl_name": "posix", 00:15:28.499 "recv_buf_size": 2097152, 00:15:28.499 "send_buf_size": 2097152, 00:15:28.499 "enable_recv_pipe": true, 00:15:28.499 "enable_quickack": false, 00:15:28.499 "enable_placement_id": 0, 00:15:28.499 "enable_zerocopy_send_server": true, 00:15:28.499 "enable_zerocopy_send_client": false, 00:15:28.499 "zerocopy_threshold": 0, 00:15:28.499 "tls_version": 0, 00:15:28.499 "enable_ktls": false 00:15:28.499 } 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "method": "sock_impl_set_options", 00:15:28.499 "params": { 00:15:28.499 "impl_name": "uring", 00:15:28.499 "recv_buf_size": 2097152, 00:15:28.499 "send_buf_size": 2097152, 00:15:28.499 
"enable_recv_pipe": true, 00:15:28.499 "enable_quickack": false, 00:15:28.499 "enable_placement_id": 0, 00:15:28.499 "enable_zerocopy_send_server": false, 00:15:28.499 "enable_zerocopy_send_client": false, 00:15:28.499 "zerocopy_threshold": 0, 00:15:28.499 "tls_version": 0, 00:15:28.499 "enable_ktls": false 00:15:28.499 } 00:15:28.499 } 00:15:28.499 ] 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "subsystem": "vmd", 00:15:28.499 "config": [] 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "subsystem": "accel", 00:15:28.499 "config": [ 00:15:28.499 { 00:15:28.499 "method": "accel_set_options", 00:15:28.499 "params": { 00:15:28.499 "small_cache_size": 128, 00:15:28.499 "large_cache_size": 16, 00:15:28.499 "task_count": 2048, 00:15:28.499 "sequence_count": 2048, 00:15:28.499 "buf_count": 2048 00:15:28.499 } 00:15:28.499 } 00:15:28.499 ] 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "subsystem": "bdev", 00:15:28.499 "config": [ 00:15:28.499 { 00:15:28.499 "method": "bdev_set_options", 00:15:28.499 "params": { 00:15:28.499 "bdev_io_pool_size": 65535, 00:15:28.499 "bdev_io_cache_size": 256, 00:15:28.499 "bdev_auto_examine": true, 00:15:28.499 "iobuf_small_cache_size": 128, 00:15:28.499 "iobuf_large_cache_size": 16 00:15:28.499 } 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "method": "bdev_raid_set_options", 00:15:28.499 "params": { 00:15:28.499 "process_window_size_kb": 1024, 00:15:28.499 "process_max_bandwidth_mb_sec": 0 00:15:28.499 } 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "method": "bdev_iscsi_set_options", 00:15:28.499 "params": { 00:15:28.499 "timeout_sec": 30 00:15:28.499 } 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "method": "bdev_nvme_set_options", 00:15:28.499 "params": { 00:15:28.499 "action_on_timeout": "none", 00:15:28.499 "timeout_us": 0, 00:15:28.499 "timeout_admin_us": 0, 00:15:28.499 "keep_alive_timeout_ms": 10000, 00:15:28.499 "arbitration_burst": 0, 00:15:28.499 "low_priority_weight": 0, 00:15:28.499 "medium_priority_weight": 0, 00:15:28.499 "high_priority_weight": 0, 00:15:28.499 "nvme_adminq_poll_period_us": 10000, 00:15:28.499 "nvme_ioq_poll_period_us": 0, 00:15:28.499 "io_queue_requests": 512, 00:15:28.499 "delay_cmd_submit": true, 00:15:28.499 "transport_retry_count": 4, 00:15:28.499 "bdev_retry_count": 3, 00:15:28.499 "transport_ack_timeout": 0, 00:15:28.499 "ctrlr_loss_timeout_sec": 0, 00:15:28.499 "reconnect_delay_sec": 0, 00:15:28.499 "fast_io_fail_timeout_sec": 0, 00:15:28.499 "disable_auto_failback": false, 00:15:28.499 "generate_uuids": false, 00:15:28.499 "transport_tos": 0, 00:15:28.499 "nvme_error_stat": false, 00:15:28.499 "rdma_srq_size": 0, 00:15:28.499 "io_path_stat": false, 00:15:28.499 "allow_accel_sequence": false, 00:15:28.499 "rdma_max_cq_size": 0, 00:15:28.499 "rdma_cm_event_timeout_ms": 0, 00:15:28.499 "dhchap_digests": [ 00:15:28.499 "sha256", 00:15:28.499 "sha384", 00:15:28.499 "sha512" 00:15:28.499 ], 00:15:28.499 "dhchap_dhgroups": [ 00:15:28.499 "null", 00:15:28.499 "ffdhe2048", 00:15:28.499 "ffdhe3072", 00:15:28.499 "ffdhe4096", 00:15:28.499 "ffdhe6144", 00:15:28.499 "ffdhe8192" 00:15:28.499 ] 00:15:28.499 } 00:15:28.499 }, 00:15:28.499 { 00:15:28.499 "method": "bdev_nvme_attach_controller", 00:15:28.499 "params": { 00:15:28.499 "name": "nvme0", 00:15:28.499 "trtype": "TCP", 00:15:28.499 "adrfam": "IPv4", 00:15:28.499 "traddr": "10.0.0.3", 00:15:28.499 "trsvcid": "4420", 00:15:28.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.499 "prchk_reftag": false, 00:15:28.499 "prchk_guard": false, 00:15:28.499 "ctrlr_loss_timeout_sec": 0, 00:15:28.499 
"reconnect_delay_sec": 0, 00:15:28.499 "fast_io_fail_timeout_sec": 0, 00:15:28.499 "psk": "key0", 00:15:28.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.499 "hdgst": false, 00:15:28.500 "ddgst": false 00:15:28.500 } 00:15:28.500 }, 00:15:28.500 { 00:15:28.500 "method": "bdev_nvme_set_hotplug", 00:15:28.500 "params": { 00:15:28.500 "period_us": 100000, 00:15:28.500 "enable": false 00:15:28.500 } 00:15:28.500 }, 00:15:28.500 { 00:15:28.500 "method": "bdev_enable_histogram", 00:15:28.500 "params": { 00:15:28.500 "name": "nvme0n1", 00:15:28.500 "enable": true 00:15:28.500 } 00:15:28.500 }, 00:15:28.500 { 00:15:28.500 "method": "bdev_wait_for_examine" 00:15:28.500 } 00:15:28.500 ] 00:15:28.500 }, 00:15:28.500 { 00:15:28.500 "subsystem": "nbd", 00:15:28.500 "config": [] 00:15:28.500 } 00:15:28.500 ] 00:15:28.500 }' 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84338 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84338 ']' 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84338 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84338 00:15:28.500 killing process with pid 84338 00:15:28.500 Received shutdown signal, test time was about 1.000000 seconds 00:15:28.500 00:15:28.500 Latency(us) 00:15:28.500 [2024-12-08T18:32:46.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.500 [2024-12-08T18:32:46.430Z] =================================================================================================================== 00:15:28.500 [2024-12-08T18:32:46.430Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84338' 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84338 00:15:28.500 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84338 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84306 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84306 ']' 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84306 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84306 00:15:28.759 killing process with pid 84306 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84306' 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84306 00:15:28.759 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84306 00:15:29.019 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:29.019 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:29.019 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.019 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:29.019 "subsystems": [ 00:15:29.019 { 00:15:29.019 "subsystem": "keyring", 00:15:29.019 "config": [ 00:15:29.019 { 00:15:29.019 "method": "keyring_file_add_key", 00:15:29.019 "params": { 00:15:29.019 "name": "key0", 00:15:29.019 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:29.019 } 00:15:29.019 } 00:15:29.019 ] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "iobuf", 00:15:29.019 "config": [ 00:15:29.019 { 00:15:29.019 "method": "iobuf_set_options", 00:15:29.019 "params": { 00:15:29.019 "small_pool_count": 8192, 00:15:29.019 "large_pool_count": 1024, 00:15:29.019 "small_bufsize": 8192, 00:15:29.019 "large_bufsize": 135168 00:15:29.019 } 00:15:29.019 } 00:15:29.019 ] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "sock", 00:15:29.019 "config": [ 00:15:29.019 { 00:15:29.019 "method": "sock_set_default_impl", 00:15:29.019 "params": { 00:15:29.019 "impl_name": "uring" 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "sock_impl_set_options", 00:15:29.019 "params": { 00:15:29.019 "impl_name": "ssl", 00:15:29.019 "recv_buf_size": 4096, 00:15:29.019 "send_buf_size": 4096, 00:15:29.019 "enable_recv_pipe": true, 00:15:29.019 "enable_quickack": false, 00:15:29.019 "enable_placement_id": 0, 00:15:29.019 "enable_zerocopy_send_server": true, 00:15:29.019 "enable_zerocopy_send_client": false, 00:15:29.019 "zerocopy_threshold": 0, 00:15:29.019 "tls_version": 0, 00:15:29.019 "enable_ktls": false 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "sock_impl_set_options", 00:15:29.019 "params": { 00:15:29.019 "impl_name": "posix", 00:15:29.019 "recv_buf_size": 2097152, 00:15:29.019 "send_buf_size": 2097152, 00:15:29.019 "enable_recv_pipe": true, 00:15:29.019 "enable_quickack": false, 00:15:29.019 "enable_placement_id": 0, 00:15:29.019 "enable_zerocopy_send_server": true, 00:15:29.019 "enable_zerocopy_send_client": false, 00:15:29.019 "zerocopy_threshold": 0, 00:15:29.019 "tls_version": 0, 00:15:29.019 "enable_ktls": false 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "sock_impl_set_options", 00:15:29.019 "params": { 00:15:29.019 "impl_name": "uring", 00:15:29.019 "recv_buf_size": 2097152, 00:15:29.019 "send_buf_size": 2097152, 00:15:29.019 "enable_recv_pipe": true, 00:15:29.019 "enable_quickack": false, 00:15:29.019 "enable_placement_id": 0, 00:15:29.019 "enable_zerocopy_send_server": false, 00:15:29.019 "enable_zerocopy_send_client": false, 00:15:29.019 "zerocopy_threshold": 0, 00:15:29.019 "tls_version": 0, 00:15:29.019 "enable_ktls": false 00:15:29.019 } 00:15:29.019 } 00:15:29.019 ] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "vmd", 00:15:29.019 "config": [] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "accel", 00:15:29.019 "config": [ 
00:15:29.019 { 00:15:29.019 "method": "accel_set_options", 00:15:29.019 "params": { 00:15:29.019 "small_cache_size": 128, 00:15:29.019 "large_cache_size": 16, 00:15:29.019 "task_count": 2048, 00:15:29.019 "sequence_count": 2048, 00:15:29.019 "buf_count": 2048 00:15:29.019 } 00:15:29.019 } 00:15:29.019 ] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "bdev", 00:15:29.019 "config": [ 00:15:29.019 { 00:15:29.019 "method": "bdev_set_options", 00:15:29.019 "params": { 00:15:29.019 "bdev_io_pool_size": 65535, 00:15:29.019 "bdev_io_cache_size": 256, 00:15:29.019 "bdev_auto_examine": true, 00:15:29.019 "iobuf_small_cache_size": 128, 00:15:29.019 "iobuf_large_cache_size": 16 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "bdev_raid_set_options", 00:15:29.019 "params": { 00:15:29.019 "process_window_size_kb": 1024, 00:15:29.019 "process_max_bandwidth_mb_sec": 0 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "bdev_iscsi_set_options", 00:15:29.019 "params": { 00:15:29.019 "timeout_sec": 30 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "bdev_nvme_set_options", 00:15:29.019 "params": { 00:15:29.019 "action_on_timeout": "none", 00:15:29.019 "timeout_us": 0, 00:15:29.019 "timeout_admin_us": 0, 00:15:29.019 "keep_alive_timeout_ms": 10000, 00:15:29.019 "arbitration_burst": 0, 00:15:29.019 "low_priority_weight": 0, 00:15:29.019 "medium_priority_weight": 0, 00:15:29.019 "high_priority_weight": 0, 00:15:29.019 "nvme_adminq_poll_period_us": 10000, 00:15:29.019 "nvme_ioq_poll_period_us": 0, 00:15:29.019 "io_queue_requests": 0, 00:15:29.019 "delay_cmd_submit": true, 00:15:29.019 "transport_retry_count": 4, 00:15:29.019 "bdev_retry_count": 3, 00:15:29.019 "transport_ack_timeout": 0, 00:15:29.019 "ctrlr_loss_timeout_sec": 0, 00:15:29.019 "reconnect_delay_sec": 0, 00:15:29.019 "fast_io_fail_timeout_sec": 0, 00:15:29.019 "disable_auto_failback": false, 00:15:29.019 "generate_uuids": false, 00:15:29.019 "transport_tos": 0, 00:15:29.019 "nvme_error_stat": false, 00:15:29.019 "rdma_srq_size": 0, 00:15:29.019 "io_path_stat": false, 00:15:29.019 "allow_accel_sequence": false, 00:15:29.019 "rdma_max_cq_size": 0, 00:15:29.019 "rdma_cm_event_timeout_ms": 0, 00:15:29.019 "dhchap_digests": [ 00:15:29.019 "sha256", 00:15:29.019 "sha384", 00:15:29.019 "sha512" 00:15:29.019 ], 00:15:29.019 "dhchap_dhgroups": [ 00:15:29.019 "null", 00:15:29.019 "ffdhe2048", 00:15:29.019 "ffdhe3072", 00:15:29.019 "ffdhe4096", 00:15:29.019 "ffdhe6144", 00:15:29.019 "ffdhe8192" 00:15:29.019 ] 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "bdev_nvme_set_hotplug", 00:15:29.019 "params": { 00:15:29.019 "period_us": 100000, 00:15:29.019 "enable": false 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "bdev_malloc_create", 00:15:29.019 "params": { 00:15:29.019 "name": "malloc0", 00:15:29.019 "num_blocks": 8192, 00:15:29.019 "block_size": 4096, 00:15:29.019 "physical_block_size": 4096, 00:15:29.019 "uuid": "3dc680d6-256e-4d80-9e3d-e9bb55125d1b", 00:15:29.019 "optimal_io_boundary": 0, 00:15:29.019 "md_size": 0, 00:15:29.019 "dif_type": 0, 00:15:29.019 "dif_is_head_of_md": false, 00:15:29.019 "dif_pi_format": 0 00:15:29.019 } 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "method": "bdev_wait_for_examine" 00:15:29.019 } 00:15:29.019 ] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "nbd", 00:15:29.019 "config": [] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "scheduler", 00:15:29.019 "config": [ 00:15:29.019 { 00:15:29.019 
"method": "framework_set_scheduler", 00:15:29.019 "params": { 00:15:29.019 "name": "static" 00:15:29.019 } 00:15:29.019 } 00:15:29.019 ] 00:15:29.019 }, 00:15:29.019 { 00:15:29.019 "subsystem": "nvmf", 00:15:29.019 "config": [ 00:15:29.019 { 00:15:29.019 "method": "nvmf_set_config", 00:15:29.019 "params": { 00:15:29.019 "discovery_filter": "match_any", 00:15:29.019 "admin_cmd_passthru": { 00:15:29.019 "identify_ctrlr": false 00:15:29.019 }, 00:15:29.019 "dhchap_digests": [ 00:15:29.019 "sha256", 00:15:29.019 "sha384", 00:15:29.019 "sha512" 00:15:29.019 ], 00:15:29.019 "dhchap_dhgroups": [ 00:15:29.019 "null", 00:15:29.020 "ffdhe2048", 00:15:29.020 "ffdhe3072", 00:15:29.020 "ffdhe4096", 00:15:29.020 "ffdhe6144", 00:15:29.020 "ffdhe8192" 00:15:29.020 ] 00:15:29.020 } 00:15:29.020 }, 00:15:29.020 { 00:15:29.020 "method": "nvmf_set_max_subsystems", 00:15:29.020 "params": { 00:15:29.020 "max_subsystems": 1024 00:15:29.020 } 00:15:29.020 }, 00:15:29.020 { 00:15:29.020 "method": "nvmf_set_crdt", 00:15:29.020 "params": { 00:15:29.020 "crdt1": 0, 00:15:29.020 "crdt2": 0, 00:15:29.020 "crdt3": 0 00:15:29.020 } 00:15:29.020 }, 00:15:29.020 { 00:15:29.020 "method": "nvmf_create_transport", 00:15:29.020 "params": { 00:15:29.020 "trtype": "TCP", 00:15:29.020 "max_queue_depth": 128, 00:15:29.020 "max_io_qpairs_per_ctrlr": 127, 00:15:29.020 "in_capsule_data_size": 4096, 00:15:29.020 "max_io_size": 131072, 00:15:29.020 "io_unit_size": 131072, 00:15:29.020 "max_aq_depth": 128, 00:15:29.020 "num_shared_buffers": 511, 00:15:29.020 "buf_cache_size": 4294967295, 00:15:29.020 "dif_insert_or_strip": false, 00:15:29.020 "zcopy": false, 00:15:29.020 "c2h_success": false, 00:15:29.020 "sock_priority": 0, 00:15:29.020 "abort_timeout_sec": 1, 00:15:29.020 "ack_timeout": 0, 00:15:29.020 "data_wr_pool_size": 0 00:15:29.020 } 00:15:29.020 }, 00:15:29.020 { 00:15:29.020 "method": "nvmf_create_subsystem", 00:15:29.020 "params": { 00:15:29.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.020 "allow_any_host": false, 00:15:29.020 "serial_number": "00000000000000000000", 00:15:29.020 "model_number": "SPDK bdev Controller", 00:15:29.020 "max_namespaces": 32, 00:15:29.020 "min_cntlid": 1, 00:15:29.020 "max_cntlid": 65519, 00:15:29.020 "ana_reporting": false 00:15:29.020 } 00:15:29.020 }, 00:15:29.020 { 00:15:29.020 "method": "nvmf_subsystem_add_host", 00:15:29.020 "params": { 00:15:29.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.020 "host": "nqn.2016-06.io.spdk:host1", 00:15:29.020 "psk": "key0" 00:15:29.020 } 00:15:29.020 }, 00:15:29.020 { 00:15:29.020 "method": "nvmf_subsystem_add_ns", 00:15:29.020 "params": { 00:15:29.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.020 "namespace": { 00:15:29.020 "nsid": 1, 00:15:29.020 "bdev_name": "malloc0", 00:15:29.020 "nguid": "3DC680D6256E4D809E3DE9BB55125D1B", 00:15:29.020 "uuid": "3dc680d6-256e-4d80-9e3d-e9bb55125d1b", 00:15:29.020 "no_auto_visible": false 00:15:29.020 } 00:15:29.020 } 00:15:29.020 }, 00:15:29.020 { 00:15:29.020 "method": "nvmf_subsystem_add_listener", 00:15:29.020 "params": { 00:15:29.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.020 "listen_address": { 00:15:29.020 "trtype": "TCP", 00:15:29.020 "adrfam": "IPv4", 00:15:29.020 "traddr": "10.0.0.3", 00:15:29.020 "trsvcid": "4420" 00:15:29.020 }, 00:15:29.020 "secure_channel": false, 00:15:29.020 "sock_impl": "ssl" 00:15:29.020 } 00:15:29.020 } 00:15:29.020 ] 00:15:29.020 } 00:15:29.020 ] 00:15:29.020 }' 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84391 00:15:29.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84391 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84391 ']' 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.020 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 [2024-12-08 18:32:46.956694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:29.279 [2024-12-08 18:32:46.956942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.279 [2024-12-08 18:32:47.081539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.279 [2024-12-08 18:32:47.149219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.279 [2024-12-08 18:32:47.149472] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.279 [2024-12-08 18:32:47.149632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.279 [2024-12-08 18:32:47.149779] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.279 [2024-12-08 18:32:47.149815] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
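The nvmf_trace.0 file named in these startup notices is the shared-memory tracepoint buffer for this target instance (launched with -e 0xFFFF, i.e. all tracepoint groups enabled); it is the same file the cleanup phase of this run later archives with tar. A minimal capture sketch using only commands already shown in this log (the archive name below is arbitrary):

spdk_trace -s nvmf -i 0                                  # snapshot while the target is live, as the notice suggests
tar -C /dev/shm/ -czf nvmf_trace.0.tar.gz nvmf_trace.0   # or keep the raw buffer for offline analysis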
00:15:29.279 [2024-12-08 18:32:47.149974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.538 [2024-12-08 18:32:47.315744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.538 [2024-12-08 18:32:47.392258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.538 [2024-12-08 18:32:47.439342] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.538 [2024-12-08 18:32:47.439573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84423 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84423 /var/tmp/bdevperf.sock 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84423 ']' 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
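The initiator side is the bdevperf example application, launched just below with its own JSON config on /dev/fd/63. Reading the flags in that invocation (an interpretation of the traced command line, not a quote of its help text): -m 2 pins it to core 1, -z makes it start idle and wait to be driven over RPC (which is why perform_tests is issued later on /var/tmp/bdevperf.sock), -r names that RPC socket, and -q 128 -o 4k -w verify -t 1 ask for a one-second verify workload at queue depth 128 with 4 KiB I/Os. A standalone sketch of the same launch, with the config supplied from a file instead of an inline echo (the filename is hypothetical):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -z -r /var/tmp/bdevperf.sock \
  -q 128 -o 4k -w verify -t 1 \
  -c /dev/fd/63 63< <(cat bdevperf_tls.json)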
00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:30.108 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:30.108 "subsystems": [ 00:15:30.108 { 00:15:30.108 "subsystem": "keyring", 00:15:30.108 "config": [ 00:15:30.108 { 00:15:30.108 "method": "keyring_file_add_key", 00:15:30.108 "params": { 00:15:30.108 "name": "key0", 00:15:30.109 "path": "/tmp/tmp.BIPDQsMCYr" 00:15:30.109 } 00:15:30.109 } 00:15:30.109 ] 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "subsystem": "iobuf", 00:15:30.109 "config": [ 00:15:30.109 { 00:15:30.109 "method": "iobuf_set_options", 00:15:30.109 "params": { 00:15:30.109 "small_pool_count": 8192, 00:15:30.109 "large_pool_count": 1024, 00:15:30.109 "small_bufsize": 8192, 00:15:30.109 "large_bufsize": 135168 00:15:30.109 } 00:15:30.109 } 00:15:30.109 ] 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "subsystem": "sock", 00:15:30.109 "config": [ 00:15:30.109 { 00:15:30.109 "method": "sock_set_default_impl", 00:15:30.109 "params": { 00:15:30.109 "impl_name": "uring" 00:15:30.109 } 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "method": "sock_impl_set_options", 00:15:30.109 "params": { 00:15:30.109 "impl_name": "ssl", 00:15:30.109 "recv_buf_size": 4096, 00:15:30.109 "send_buf_size": 4096, 00:15:30.109 "enable_recv_pipe": true, 00:15:30.109 "enable_quickack": false, 00:15:30.109 "enable_placement_id": 0, 00:15:30.109 "enable_zerocopy_send_server": true, 00:15:30.109 "enable_zerocopy_send_client": false, 00:15:30.109 "zerocopy_threshold": 0, 00:15:30.109 "tls_version": 0, 00:15:30.109 "enable_ktls": false 00:15:30.109 } 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "method": "sock_impl_set_options", 00:15:30.109 "params": { 00:15:30.109 "impl_name": "posix", 00:15:30.109 "recv_buf_size": 2097152, 00:15:30.109 "send_buf_size": 2097152, 00:15:30.109 "enable_recv_pipe": true, 00:15:30.109 "enable_quickack": false, 00:15:30.109 "enable_placement_id": 0, 00:15:30.109 "enable_zerocopy_send_server": true, 00:15:30.109 "enable_zerocopy_send_client": false, 00:15:30.109 "zerocopy_threshold": 0, 00:15:30.109 "tls_version": 0, 00:15:30.109 "enable_ktls": false 00:15:30.109 } 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "method": "sock_impl_set_options", 00:15:30.109 "params": { 00:15:30.109 "impl_name": "uring", 00:15:30.109 "recv_buf_size": 2097152, 00:15:30.109 "send_buf_size": 2097152, 00:15:30.109 "enable_recv_pipe": true, 00:15:30.109 "enable_quickack": false, 00:15:30.109 "enable_placement_id": 0, 00:15:30.109 "enable_zerocopy_send_server": false, 00:15:30.109 "enable_zerocopy_send_client": false, 00:15:30.109 "zerocopy_threshold": 0, 00:15:30.109 "tls_version": 0, 00:15:30.109 "enable_ktls": false 00:15:30.109 } 00:15:30.109 } 00:15:30.109 ] 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "subsystem": "vmd", 00:15:30.109 "config": [] 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "subsystem": "accel", 00:15:30.109 "config": [ 00:15:30.109 { 00:15:30.109 "method": "accel_set_options", 00:15:30.109 "params": { 00:15:30.109 "small_cache_size": 128, 00:15:30.109 "large_cache_size": 16, 00:15:30.109 "task_count": 2048, 00:15:30.109 "sequence_count": 2048, 00:15:30.109 "buf_count": 2048 
00:15:30.109 } 00:15:30.109 } 00:15:30.109 ] 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "subsystem": "bdev", 00:15:30.109 "config": [ 00:15:30.109 { 00:15:30.109 "method": "bdev_set_options", 00:15:30.109 "params": { 00:15:30.109 "bdev_io_pool_size": 65535, 00:15:30.109 "bdev_io_cache_size": 256, 00:15:30.109 "bdev_auto_examine": true, 00:15:30.109 "iobuf_small_cache_size": 128, 00:15:30.109 "iobuf_large_cache_size": 16 00:15:30.109 } 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "method": "bdev_raid_set_options", 00:15:30.109 "params": { 00:15:30.109 "process_window_size_kb": 1024, 00:15:30.109 "process_max_bandwidth_mb_sec": 0 00:15:30.109 } 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "method": "bdev_iscsi_set_options", 00:15:30.109 "params": { 00:15:30.109 "timeout_sec": 30 00:15:30.109 } 00:15:30.109 }, 00:15:30.109 { 00:15:30.109 "method": "bdev_nvme_set_options", 00:15:30.109 "params": { 00:15:30.109 "action_on_timeout": "none", 00:15:30.109 "timeout_us": 0, 00:15:30.109 "timeout_admin_us": 0, 00:15:30.109 "keep_alive_timeout_ms": 10000, 00:15:30.109 "arbitration_burst": 0, 00:15:30.109 "low_priority_weight": 0, 00:15:30.109 "medium_priority_weight": 0, 00:15:30.109 "high_priority_weight": 0, 00:15:30.109 "nvme_adminq_poll_period_us": 10000, 00:15:30.109 "nvme_ioq_poll_period_us": 0, 00:15:30.109 "io_queue_requests": 512, 00:15:30.109 "delay_cmd_submit": true, 00:15:30.109 "transport_retry_count": 4, 00:15:30.109 "bdev_retry_count": 3, 00:15:30.109 "transport_ack_timeout": 0, 00:15:30.109 "ctrlr_loss_timeout_sec": 0, 00:15:30.109 "reconnect_delay_sec": 0, 00:15:30.109 "fast_io_fail_timeout_sec": 0, 00:15:30.109 "disable_auto_failback": false, 00:15:30.109 "generate_uuids": false, 00:15:30.109 "transport_tos": 0, 00:15:30.109 "nvme_error_stat": false, 00:15:30.109 "rdma_srq_size": 0, 00:15:30.109 "io_path_stat": false, 00:15:30.109 "allow_accel_sequence": false, 00:15:30.109 "rdma_max_cq_size": 0, 00:15:30.109 "rdma_cm_event_timeout_ms": 0, 00:15:30.109 "dhchap_digests": [ 00:15:30.109 "sha256", 00:15:30.109 "sha384", 00:15:30.109 "sha512" 00:15:30.109 ], 00:15:30.109 "dhchap_dhgroups": [ 00:15:30.109 "null", 00:15:30.109 "ffdhe2048", 00:15:30.109 "ffdhe3072", 00:15:30.109 "ffdhe4096", 00:15:30.109 "ffdhe6144", 00:15:30.109 "ffdhe8192" 00:15:30.110 ] 00:15:30.110 } 00:15:30.110 }, 00:15:30.110 { 00:15:30.110 "method": "bdev_nvme_attach_controller", 00:15:30.110 "params": { 00:15:30.110 "name": "nvme0", 00:15:30.110 "trtype": "TCP", 00:15:30.110 "adrfam": "IPv4", 00:15:30.110 "traddr": "10.0.0.3", 00:15:30.110 "trsvcid": "4420", 00:15:30.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.110 "prchk_reftag": false, 00:15:30.110 "prchk_guard": false, 00:15:30.110 "ctrlr_loss_timeout_sec": 0, 00:15:30.110 "reconnect_delay_sec": 0, 00:15:30.110 "fast_io_fail_timeout_sec": 0, 00:15:30.110 "psk": "key0", 00:15:30.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.110 "hdgst": false, 00:15:30.110 "ddgst": false 00:15:30.110 } 00:15:30.110 }, 00:15:30.110 { 00:15:30.110 "method": "bdev_nvme_set_hotplug", 00:15:30.110 "params": { 00:15:30.110 "period_us": 100000, 00:15:30.110 "enable": false 00:15:30.110 } 00:15:30.110 }, 00:15:30.110 { 00:15:30.110 "method": "bdev_enable_histogram", 00:15:30.110 "params": { 00:15:30.110 "name": "nvme0n1", 00:15:30.110 "enable": true 00:15:30.110 } 00:15:30.110 }, 00:15:30.110 { 00:15:30.110 "method": "bdev_wait_for_examine" 00:15:30.110 } 00:15:30.110 ] 00:15:30.110 }, 00:15:30.110 { 00:15:30.110 "subsystem": "nbd", 00:15:30.110 "config": [] 00:15:30.110 } 
00:15:30.110 ] 00:15:30.110 }' 00:15:30.110 [2024-12-08 18:32:47.998152] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:30.110 [2024-12-08 18:32:47.998253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84423 ] 00:15:30.369 [2024-12-08 18:32:48.137639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.369 [2024-12-08 18:32:48.212639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.628 [2024-12-08 18:32:48.351512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.628 [2024-12-08 18:32:48.397595] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.198 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.198 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:31.198 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:31.198 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:31.457 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.457 18:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.716 Running I/O for 1 seconds... 
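Before the workload is kicked off, the test confirms over bdevperf's RPC socket that the TLS-secured controller really attached (the nvme0 name check), and only then starts the run. Both calls appear above and can be issued by hand against the same socket:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests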
00:15:32.656 4226.00 IOPS, 16.51 MiB/s 00:15:32.656 Latency(us) 00:15:32.656 [2024-12-08T18:32:50.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.656 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.656 Verification LBA range: start 0x0 length 0x2000 00:15:32.656 nvme0n1 : 1.02 4286.87 16.75 0.00 0.00 29548.90 290.44 19303.33 00:15:32.656 [2024-12-08T18:32:50.586Z] =================================================================================================================== 00:15:32.656 [2024-12-08T18:32:50.586Z] Total : 4286.87 16.75 0.00 0.00 29548.90 290.44 19303.33 00:15:32.656 { 00:15:32.656 "results": [ 00:15:32.656 { 00:15:32.656 "job": "nvme0n1", 00:15:32.656 "core_mask": "0x2", 00:15:32.656 "workload": "verify", 00:15:32.656 "status": "finished", 00:15:32.656 "verify_range": { 00:15:32.656 "start": 0, 00:15:32.656 "length": 8192 00:15:32.656 }, 00:15:32.656 "queue_depth": 128, 00:15:32.656 "io_size": 4096, 00:15:32.656 "runtime": 1.015659, 00:15:32.656 "iops": 4286.87187333544, 00:15:32.656 "mibps": 16.745593255216562, 00:15:32.656 "io_failed": 0, 00:15:32.656 "io_timeout": 0, 00:15:32.656 "avg_latency_us": 29548.90182486324, 00:15:32.656 "min_latency_us": 290.44363636363636, 00:15:32.656 "max_latency_us": 19303.33090909091 00:15:32.656 } 00:15:32.656 ], 00:15:32.656 "core_count": 1 00:15:32.656 } 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:32.656 nvmf_trace.0 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84423 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84423 ']' 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84423 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84423 00:15:32.656 killing process with 
pid 84423 00:15:32.656 Received shutdown signal, test time was about 1.000000 seconds 00:15:32.656 00:15:32.656 Latency(us) 00:15:32.656 [2024-12-08T18:32:50.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.656 [2024-12-08T18:32:50.586Z] =================================================================================================================== 00:15:32.656 [2024-12-08T18:32:50.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84423' 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84423 00:15:32.656 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84423 00:15:32.915 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:32.915 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:32.915 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:32.915 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.915 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:32.915 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.915 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:33.175 rmmod nvme_tcp 00:15:33.175 rmmod nvme_fabrics 00:15:33.175 rmmod nvme_keyring 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 84391 ']' 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 84391 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84391 ']' 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84391 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84391 00:15:33.175 killing process with pid 84391 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84391' 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84391 00:15:33.175 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84391 
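With both processes stopped, nvmftestfini tears the virtual test network back down. The firewall half of that cleanup, traced below as separate iptables-save, grep -v SPDK_NVMF and iptables-restore steps, amounts to dropping every rule the test tagged with the SPDK_NVMF comment; a plausible one-line rendering of what the iptr helper is doing (an inference from the trace, not a quote of nvmf/common.sh):

iptables-save | grep -v SPDK_NVMF | iptables-restore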
00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.435 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1xlFg3faTh /tmp/tmp.WdfcquErKp /tmp/tmp.BIPDQsMCYr 00:15:33.694 ************************************ 00:15:33.694 END TEST nvmf_tls 00:15:33.694 ************************************ 00:15:33.694 00:15:33.694 real 1m28.472s 00:15:33.694 user 2m20.133s 00:15:33.694 sys 0m29.715s 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.694 ************************************ 00:15:33.694 START TEST nvmf_fips 00:15:33.694 ************************************ 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:33.694 * Looking for test storage... 00:15:33.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:33.694 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:33.695 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:33.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.956 --rc genhtml_branch_coverage=1 00:15:33.956 --rc genhtml_function_coverage=1 00:15:33.956 --rc genhtml_legend=1 00:15:33.956 --rc geninfo_all_blocks=1 00:15:33.956 --rc geninfo_unexecuted_blocks=1 00:15:33.956 00:15:33.956 ' 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:33.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.956 --rc genhtml_branch_coverage=1 00:15:33.956 --rc genhtml_function_coverage=1 00:15:33.956 --rc genhtml_legend=1 00:15:33.956 --rc geninfo_all_blocks=1 00:15:33.956 --rc geninfo_unexecuted_blocks=1 00:15:33.956 00:15:33.956 ' 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:33.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.956 --rc genhtml_branch_coverage=1 00:15:33.956 --rc genhtml_function_coverage=1 00:15:33.956 --rc genhtml_legend=1 00:15:33.956 --rc geninfo_all_blocks=1 00:15:33.956 --rc geninfo_unexecuted_blocks=1 00:15:33.956 00:15:33.956 ' 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:33.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.956 --rc genhtml_branch_coverage=1 00:15:33.956 --rc genhtml_function_coverage=1 00:15:33.956 --rc genhtml_legend=1 00:15:33.956 --rc geninfo_all_blocks=1 00:15:33.956 --rc geninfo_unexecuted_blocks=1 00:15:33.956 00:15:33.956 ' 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
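The lcov gate above (lt 1.15 2) and the openssl gate a little further down (ge 3.1.1 3.0.0) both run through the same cmp_versions helper: the two version strings are split on '.', '-' and ':' into numeric fields, which are then compared left to right. A simplified bash sketch of that idea (the real helper also handles the eq case and uneven field counts more carefully):

ver_ge() {                         # return 0 if $1 >= $2
  local IFS=.-: i a b
  read -ra a <<< "$1"; read -ra b <<< "$2"
  for i in "${!a[@]}"; do
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
  done
  return 0
}
ver_ge 3.1.1 3.0.0 && echo "openssl new enough"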
00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.956 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.957 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:33.957 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:33.958 Error setting digest 00:15:33.958 40F29C7F327F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:33.958 40F29C7F327F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:33.958 
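The md5 failure above is the expected result: with OPENSSL_CONF pointing at the generated spdk_fips.conf, only the base and FIPS providers are loaded (the provider listing above shows exactly those two), so a non-approved digest has nothing to fetch it and openssl exits non-zero, which the NOT wrapper then counts as a pass. A standalone reproduction of the same negative check (assuming the generated spdk_fips.conf is in the current directory):

OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
  && echo "MD5 accepted - FIPS providers not enforcing" \
  || echo "MD5 rejected as expected under FIPS"

The test requires the second branch.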
18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:33.958 Cannot find device "nvmf_init_br" 00:15:33.958 18:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:33.958 Cannot find device "nvmf_init_br2" 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:33.958 Cannot find device "nvmf_tgt_br" 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:33.958 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.218 Cannot find device "nvmf_tgt_br2" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:34.218 Cannot find device "nvmf_init_br" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:34.218 Cannot find device "nvmf_init_br2" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:34.218 Cannot find device "nvmf_tgt_br" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:34.218 Cannot find device "nvmf_tgt_br2" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:34.218 Cannot find device "nvmf_br" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:34.218 Cannot find device "nvmf_init_if" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:34.218 Cannot find device "nvmf_init_if2" 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.218 18:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:34.218 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:34.218 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:34.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:34.478 00:15:34.478 --- 10.0.0.3 ping statistics --- 00:15:34.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.478 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:34.478 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:34.478 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:15:34.478 00:15:34.478 --- 10.0.0.4 ping statistics --- 00:15:34.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.478 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:34.478 00:15:34.478 --- 10.0.0.1 ping statistics --- 00:15:34.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.478 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:34.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:34.478 00:15:34.478 --- 10.0.0.2 ping statistics --- 00:15:34.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.478 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=84740 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 84740 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84740 ']' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.478 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.478 [2024-12-08 18:32:52.309824] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:34.478 [2024-12-08 18:32:52.309915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.738 [2024-12-08 18:32:52.451169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.738 [2024-12-08 18:32:52.521122] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.738 [2024-12-08 18:32:52.521453] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.738 [2024-12-08 18:32:52.521479] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.738 [2024-12-08 18:32:52.521491] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.738 [2024-12-08 18:32:52.521500] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.738 [2024-12-08 18:32:52.521535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.738 [2024-12-08 18:32:52.577497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.4nR 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.4nR 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.4nR 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.4nR 00:15:35.675 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.935 [2024-12-08 18:32:53.666545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.935 [2024-12-08 18:32:53.682499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:35.935 [2024-12-08 18:32:53.682719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.935 malloc0 00:15:35.935 18:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84782 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84782 /var/tmp/bdevperf.sock 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84782 ']' 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.935 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:36.195 [2024-12-08 18:32:53.865730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:36.195 [2024-12-08 18:32:53.865845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84782 ] 00:15:36.195 [2024-12-08 18:32:54.011432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.195 [2024-12-08 18:32:54.088861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.454 [2024-12-08 18:32:54.147634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.023 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.023 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:37.023 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.4nR 00:15:37.294 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:37.557 [2024-12-08 18:32:55.288476] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:37.557 TLSTESTn1 00:15:37.557 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:37.816 Running I/O for 10 seconds... 
00:15:39.693 4064.00 IOPS, 15.88 MiB/s [2024-12-08T18:32:58.561Z] 4198.50 IOPS, 16.40 MiB/s [2024-12-08T18:32:59.939Z] 4245.00 IOPS, 16.58 MiB/s [2024-12-08T18:33:00.875Z] 4258.25 IOPS, 16.63 MiB/s [2024-12-08T18:33:01.814Z] 4271.60 IOPS, 16.69 MiB/s [2024-12-08T18:33:02.801Z] 4285.50 IOPS, 16.74 MiB/s [2024-12-08T18:33:03.756Z] 4284.00 IOPS, 16.73 MiB/s [2024-12-08T18:33:04.692Z] 4271.62 IOPS, 16.69 MiB/s [2024-12-08T18:33:05.629Z] 4257.00 IOPS, 16.63 MiB/s [2024-12-08T18:33:05.629Z] 4250.00 IOPS, 16.60 MiB/s 00:15:47.699 Latency(us) 00:15:47.699 [2024-12-08T18:33:05.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.699 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:47.699 Verification LBA range: start 0x0 length 0x2000 00:15:47.699 TLSTESTn1 : 10.02 4255.81 16.62 0.00 0.00 30023.87 4766.25 23831.27 00:15:47.699 [2024-12-08T18:33:05.629Z] =================================================================================================================== 00:15:47.699 [2024-12-08T18:33:05.629Z] Total : 4255.81 16.62 0.00 0.00 30023.87 4766.25 23831.27 00:15:47.699 { 00:15:47.699 "results": [ 00:15:47.699 { 00:15:47.699 "job": "TLSTESTn1", 00:15:47.699 "core_mask": "0x4", 00:15:47.699 "workload": "verify", 00:15:47.699 "status": "finished", 00:15:47.699 "verify_range": { 00:15:47.699 "start": 0, 00:15:47.699 "length": 8192 00:15:47.699 }, 00:15:47.699 "queue_depth": 128, 00:15:47.699 "io_size": 4096, 00:15:47.699 "runtime": 10.01524, 00:15:47.699 "iops": 4255.81413925178, 00:15:47.699 "mibps": 16.624273981452266, 00:15:47.699 "io_failed": 0, 00:15:47.699 "io_timeout": 0, 00:15:47.699 "avg_latency_us": 30023.8679159566, 00:15:47.699 "min_latency_us": 4766.254545454545, 00:15:47.699 "max_latency_us": 23831.272727272728 00:15:47.699 } 00:15:47.699 ], 00:15:47.699 "core_count": 1 00:15:47.699 } 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:47.699 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:47.699 nvmf_trace.0 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84782 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84782 ']' 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
84782 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84782 00:15:47.958 killing process with pid 84782 00:15:47.958 Received shutdown signal, test time was about 10.000000 seconds 00:15:47.958 00:15:47.958 Latency(us) 00:15:47.958 [2024-12-08T18:33:05.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.958 [2024-12-08T18:33:05.888Z] =================================================================================================================== 00:15:47.958 [2024-12-08T18:33:05.888Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84782' 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84782 00:15:47.958 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84782 00:15:48.216 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:48.216 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:48.216 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:48.216 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:48.216 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:48.216 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.216 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:48.216 rmmod nvme_tcp 00:15:48.216 rmmod nvme_fabrics 00:15:48.216 rmmod nvme_keyring 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 84740 ']' 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 84740 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84740 ']' 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84740 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84740 00:15:48.216 killing process with pid 84740 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84740' 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84740 00:15:48.216 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84740 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:48.474 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:48.732 18:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.4nR 00:15:48.732 ************************************ 00:15:48.732 END TEST nvmf_fips 00:15:48.732 ************************************ 00:15:48.732 00:15:48.732 real 0m15.086s 00:15:48.732 user 0m21.021s 00:15:48.732 sys 0m5.847s 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.732 ************************************ 00:15:48.732 START TEST nvmf_control_msg_list 00:15:48.732 ************************************ 00:15:48.732 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:48.992 * Looking for test storage... 00:15:48.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 00:15:48.992 --rc geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 00:15:48.992 --rc geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 00:15:48.992 --rc geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:48.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.992 --rc genhtml_branch_coverage=1 00:15:48.992 --rc genhtml_function_coverage=1 00:15:48.992 --rc genhtml_legend=1 00:15:48.992 --rc geninfo_all_blocks=1 00:15:48.992 --rc 
geninfo_unexecuted_blocks=1 00:15:48.992 00:15:48.992 ' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.992 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.992 Cannot find device "nvmf_init_br" 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.992 Cannot find device "nvmf_init_br2" 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.992 Cannot find device "nvmf_tgt_br" 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.992 Cannot find device "nvmf_tgt_br2" 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:48.992 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.993 Cannot find device "nvmf_init_br" 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.993 Cannot find device "nvmf_init_br2" 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.993 Cannot find device "nvmf_tgt_br" 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.993 Cannot find device "nvmf_tgt_br2" 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:48.993 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:49.251 Cannot find device "nvmf_br" 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:49.251 Cannot find 
device "nvmf_init_if" 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:49.251 Cannot find device "nvmf_init_if2" 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.251 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:49.251 18:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.251 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:49.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:49.511 00:15:49.511 --- 10.0.0.3 ping statistics --- 00:15:49.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.511 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:49.511 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:49.511 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:49.511 00:15:49.511 --- 10.0.0.4 ping statistics --- 00:15:49.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.511 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:49.511 00:15:49.511 --- 10.0.0.1 ping statistics --- 00:15:49.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.511 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:49.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:49.511 00:15:49.511 --- 10.0.0.2 ping statistics --- 00:15:49.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.511 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=85170 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 85170 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 85170 ']' 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.511 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 [2024-12-08 18:33:07.313283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:49.511 [2024-12-08 18:33:07.313376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.770 [2024-12-08 18:33:07.453680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.770 [2024-12-08 18:33:07.531927] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.771 [2024-12-08 18:33:07.531989] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.771 [2024-12-08 18:33:07.532004] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.771 [2024-12-08 18:33:07.532015] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.771 [2024-12-08 18:33:07.532024] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.771 [2024-12-08 18:33:07.532061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.771 [2024-12-08 18:33:07.595506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.771 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.771 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:49.771 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:49.771 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.771 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.030 [2024-12-08 18:33:07.712574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.030 Malloc0 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.030 [2024-12-08 18:33:07.759474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85195 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85196 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85197 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.030 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85195 00:15:50.030 [2024-12-08 18:33:07.937799] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.030 [2024-12-08 18:33:07.948297] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.030 [2024-12-08 18:33:07.948824] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:51.409 Initializing NVMe Controllers 00:15:51.409 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.409 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:51.409 Initialization complete. Launching workers. 00:15:51.409 ======================================================== 00:15:51.409 Latency(us) 00:15:51.409 Device Information : IOPS MiB/s Average min max 00:15:51.409 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3717.00 14.52 268.66 126.52 950.96 00:15:51.409 ======================================================== 00:15:51.409 Total : 3717.00 14.52 268.66 126.52 950.96 00:15:51.409 00:15:51.409 Initializing NVMe Controllers 00:15:51.409 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.409 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:51.409 Initialization complete. Launching workers. 00:15:51.409 ======================================================== 00:15:51.409 Latency(us) 00:15:51.409 Device Information : IOPS MiB/s Average min max 00:15:51.409 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3706.97 14.48 269.35 144.43 879.25 00:15:51.409 ======================================================== 00:15:51.409 Total : 3706.97 14.48 269.35 144.43 879.25 00:15:51.409 00:15:51.409 Initializing NVMe Controllers 00:15:51.409 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.409 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:51.409 Initialization complete. Launching workers. 
00:15:51.409 ======================================================== 00:15:51.409 Latency(us) 00:15:51.410 Device Information : IOPS MiB/s Average min max 00:15:51.410 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3719.91 14.53 268.45 150.67 925.27 00:15:51.410 ======================================================== 00:15:51.410 Total : 3719.91 14.53 268.45 150.67 925.27 00:15:51.410 00:15:51.410 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85196 00:15:51.410 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85197 00:15:51.410 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:51.410 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:51.410 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:51.410 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.410 rmmod nvme_tcp 00:15:51.410 rmmod nvme_fabrics 00:15:51.410 rmmod nvme_keyring 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 85170 ']' 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 85170 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 85170 ']' 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 85170 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85170 00:15:51.410 killing process with pid 85170 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85170' 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 85170 00:15:51.410 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 85170 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.669 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:51.929 ************************************ 00:15:51.929 END TEST 
nvmf_control_msg_list 00:15:51.929 ************************************ 00:15:51.929 00:15:51.929 real 0m3.059s 00:15:51.929 user 0m4.835s 00:15:51.929 sys 0m1.382s 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.929 ************************************ 00:15:51.929 START TEST nvmf_wait_for_buf 00:15:51.929 ************************************ 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:51.929 * Looking for test storage... 00:15:51.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:51.929 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.189 --rc genhtml_branch_coverage=1 00:15:52.189 --rc genhtml_function_coverage=1 00:15:52.189 --rc genhtml_legend=1 00:15:52.189 --rc geninfo_all_blocks=1 00:15:52.189 --rc geninfo_unexecuted_blocks=1 00:15:52.189 00:15:52.189 ' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.189 --rc genhtml_branch_coverage=1 00:15:52.189 --rc genhtml_function_coverage=1 00:15:52.189 --rc genhtml_legend=1 00:15:52.189 --rc geninfo_all_blocks=1 00:15:52.189 --rc geninfo_unexecuted_blocks=1 00:15:52.189 00:15:52.189 ' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.189 --rc genhtml_branch_coverage=1 00:15:52.189 --rc genhtml_function_coverage=1 00:15:52.189 --rc genhtml_legend=1 00:15:52.189 --rc geninfo_all_blocks=1 00:15:52.189 --rc geninfo_unexecuted_blocks=1 00:15:52.189 00:15:52.189 ' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.189 --rc genhtml_branch_coverage=1 00:15:52.189 --rc genhtml_function_coverage=1 00:15:52.189 --rc genhtml_legend=1 00:15:52.189 --rc geninfo_all_blocks=1 00:15:52.189 --rc geninfo_unexecuted_blocks=1 00:15:52.189 00:15:52.189 ' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.189 18:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.189 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.190 Cannot find device "nvmf_init_br" 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.190 Cannot find device "nvmf_init_br2" 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.190 Cannot find device "nvmf_tgt_br" 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.190 Cannot find device "nvmf_tgt_br2" 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.190 Cannot find device "nvmf_init_br" 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:52.190 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.190 Cannot find device "nvmf_init_br2" 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.190 Cannot find device "nvmf_tgt_br" 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.190 Cannot find device "nvmf_tgt_br2" 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.190 Cannot find device "nvmf_br" 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.190 Cannot find device "nvmf_init_if" 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.190 Cannot find device "nvmf_init_if2" 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.190 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:52.190 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:52.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:15:52.450 00:15:52.450 --- 10.0.0.3 ping statistics --- 00:15:52.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.450 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:52.450 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:52.450 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:15:52.450 00:15:52.450 --- 10.0.0.4 ping statistics --- 00:15:52.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.450 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:52.450 00:15:52.450 --- 10.0.0.1 ping statistics --- 00:15:52.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.450 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:52.450 00:15:52.450 --- 10.0.0.2 ping statistics --- 00:15:52.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.450 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=85440 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:52.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 85440 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 85440 ']' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.450 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.451 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.710 [2024-12-08 18:33:10.410157] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:52.710 [2024-12-08 18:33:10.410251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.710 [2024-12-08 18:33:10.549096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.710 [2024-12-08 18:33:10.620135] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.710 [2024-12-08 18:33:10.620441] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.710 [2024-12-08 18:33:10.620462] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.710 [2024-12-08 18:33:10.620471] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.710 [2024-12-08 18:33:10.620477] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.710 [2024-12-08 18:33:10.620514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.969 18:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.969 [2024-12-08 18:33:10.761348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.969 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.970 Malloc0 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.970 [2024-12-08 18:33:10.821353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.970 [2024-12-08 18:33:10.849463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.970 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:53.229 [2024-12-08 18:33:11.036516] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:54.605 Initializing NVMe Controllers 00:15:54.605 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:54.605 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:54.605 Initialization complete. Launching workers. 00:15:54.605 ======================================================== 00:15:54.605 Latency(us) 00:15:54.605 Device Information : IOPS MiB/s Average min max 00:15:54.605 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 502.50 62.81 7960.37 5970.77 11029.54 00:15:54.605 ======================================================== 00:15:54.605 Total : 502.50 62.81 7960.37 5970.77 11029.54 00:15:54.605 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.605 rmmod nvme_tcp 00:15:54.605 rmmod nvme_fabrics 00:15:54.605 rmmod nvme_keyring 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 85440 ']' 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 85440 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 85440 ']' 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 85440 00:15:54.605 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:54.606 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.606 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85440 00:15:54.606 killing process with pid 85440 00:15:54.606 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.606 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.606 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85440' 00:15:54.606 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 85440 00:15:54.606 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 85440 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:54.865 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:55.124 00:15:55.124 real 0m3.284s 00:15:55.124 user 0m2.586s 00:15:55.124 sys 0m0.815s 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.124 ************************************ 00:15:55.124 END TEST nvmf_wait_for_buf 00:15:55.124 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.124 ************************************ 00:15:55.124 18:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:55.124 18:33:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:55.124 18:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:55.124 18:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.124 18:33:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:55.124 ************************************ 00:15:55.124 START TEST nvmf_fuzz 00:15:55.124 ************************************ 00:15:55.124 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:55.385 * Looking for test storage... 
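For reference, the wait_for_buf verdict above hinges on the iobuf retry counter: this run recorded 4788 retried small-buffer allocations, so the [[ 4788 -eq 0 ]] guard at target/wait_for_buf.sh:33 evaluates false and the script proceeds to teardown. The same query can be reproduced by hand against a running target, assuming the default RPC socket at /var/tmp/spdk.sock and the in-tree scripts/rpc.py client (a sketch, not part of the harness itself):

    ./scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'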
00:15:55.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.385 --rc genhtml_branch_coverage=1 00:15:55.385 --rc genhtml_function_coverage=1 00:15:55.385 --rc genhtml_legend=1 00:15:55.385 --rc geninfo_all_blocks=1 00:15:55.385 --rc geninfo_unexecuted_blocks=1 00:15:55.385 00:15:55.385 ' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.385 --rc genhtml_branch_coverage=1 00:15:55.385 --rc genhtml_function_coverage=1 00:15:55.385 --rc genhtml_legend=1 00:15:55.385 --rc geninfo_all_blocks=1 00:15:55.385 --rc geninfo_unexecuted_blocks=1 00:15:55.385 00:15:55.385 ' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.385 --rc genhtml_branch_coverage=1 00:15:55.385 --rc genhtml_function_coverage=1 00:15:55.385 --rc genhtml_legend=1 00:15:55.385 --rc geninfo_all_blocks=1 00:15:55.385 --rc geninfo_unexecuted_blocks=1 00:15:55.385 00:15:55.385 ' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.385 --rc genhtml_branch_coverage=1 00:15:55.385 --rc genhtml_function_coverage=1 00:15:55.385 --rc genhtml_legend=1 00:15:55.385 --rc geninfo_all_blocks=1 00:15:55.385 --rc geninfo_unexecuted_blocks=1 00:15:55.385 00:15:55.385 ' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
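The scripts/common.sh trace just above (lt 1.15 2 driving cmp_versions) walks dotted version strings field by field to decide whether the installed lcov predates 2.x, apparently so the harness can pick the --rc option spelling that release still accepts. A minimal standalone sketch of the same idea, using a hypothetical ver_lt helper rather than the repo's functions:

    # returns success when $1 sorts strictly before $2, comparing numeric dot-separated fields
    ver_lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal versions are not "less than"
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"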
00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.385 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
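The "[: : integer expression expected" message from nvmf/common.sh line 33 is benign: the trace shows '[' '' -eq 1 ']', an unset or empty variable fed to an integer comparison, so [ complains and the branch is simply not taken. A defensive spelling for that kind of check (SOME_FLAG and the appended argument below are placeholders for illustration, not the names common.sh actually uses):

    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # default empty/unset to 0 so test always sees an integer
        NVMF_APP+=(--some-extra-arg)       # placeholder; the real branch appends whatever app argument common.sh intends
    fi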
00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:55.386 Cannot find device "nvmf_init_br" 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:55.386 18:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:55.386 Cannot find device "nvmf_init_br2" 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:55.386 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:55.647 Cannot find device "nvmf_tgt_br" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.647 Cannot find device "nvmf_tgt_br2" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:55.647 Cannot find device "nvmf_init_br" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:55.647 Cannot find device "nvmf_init_br2" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:55.647 Cannot find device "nvmf_tgt_br" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:55.647 Cannot find device "nvmf_tgt_br2" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.647 Cannot find device "nvmf_br" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.647 Cannot find device "nvmf_init_if" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.647 Cannot find device "nvmf_init_if2" 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.647 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:55.907 18:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:55.907 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:55.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:55.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:15:55.908 00:15:55.908 --- 10.0.0.3 ping statistics --- 00:15:55.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.908 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:55.908 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:55.908 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:15:55.908 00:15:55.908 --- 10.0.0.4 ping statistics --- 00:15:55.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.908 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:55.908 00:15:55.908 --- 10.0.0.1 ping statistics --- 00:15:55.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.908 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:55.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:55.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:15:55.908 00:15:55.908 --- 10.0.0.2 ping statistics --- 00:15:55.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.908 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85699 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85699 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 85699 ']' 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:55.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
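waitforlisten here blocks until the freshly launched nvmf_tgt (pid 85699, started under ip netns exec nvmf_tgt_ns_spdk with -i 0 -e 0xFFFF -m 0x1) answers on /var/tmp/spdk.sock, after which the fuzz harness configures the target with a short series of RPCs. A condensed, hand-runnable approximation of that sequence, reusing the flags logged here and a simple rpc_get_methods polling loop as a stand-in for the waitforlisten helper (not the helper itself):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done    # wait for the RPC socket to come up
    $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, as in fabrics_fuzz.sh@19
    $rpc bdev_malloc_create -b Malloc0 64 512                         # 64 MiB backing bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Each nvme_fuzz invocation that follows then targets trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, once time-bounded with -t 30 -S 123456 and once driven by example.json.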
00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.908 18:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.477 Malloc0 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:56.477 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:56.736 Shutting down the fuzz application 00:15:56.737 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:56.996 Shutting down the fuzz application 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:56.996 rmmod nvme_tcp 00:15:56.996 rmmod nvme_fabrics 00:15:56.996 rmmod nvme_keyring 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 85699 ']' 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 85699 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 85699 ']' 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 85699 00:15:56.996 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:57.256 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.256 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85699 00:15:57.256 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.256 killing process with pid 85699 00:15:57.256 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.256 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85699' 00:15:57.256 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 85699 00:15:57.256 18:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 85699 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:57.256 18:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.256 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:57.518 ************************************ 00:15:57.518 END TEST nvmf_fuzz 00:15:57.518 ************************************ 00:15:57.518 00:15:57.518 real 0m2.374s 00:15:57.518 user 0m1.957s 00:15:57.518 sys 0m0.792s 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.518 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 18:33:15 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.777 ************************************ 00:15:57.777 START TEST nvmf_multiconnection 00:15:57.777 ************************************ 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:57.777 * Looking for test storage... 00:15:57.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:57.777 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:57.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.778 --rc genhtml_branch_coverage=1 00:15:57.778 --rc genhtml_function_coverage=1 00:15:57.778 --rc genhtml_legend=1 00:15:57.778 --rc geninfo_all_blocks=1 00:15:57.778 --rc geninfo_unexecuted_blocks=1 00:15:57.778 00:15:57.778 ' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:57.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.778 --rc genhtml_branch_coverage=1 00:15:57.778 --rc genhtml_function_coverage=1 00:15:57.778 --rc genhtml_legend=1 00:15:57.778 --rc geninfo_all_blocks=1 00:15:57.778 --rc geninfo_unexecuted_blocks=1 00:15:57.778 00:15:57.778 ' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:57.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.778 --rc genhtml_branch_coverage=1 00:15:57.778 --rc genhtml_function_coverage=1 00:15:57.778 --rc genhtml_legend=1 00:15:57.778 --rc geninfo_all_blocks=1 00:15:57.778 --rc geninfo_unexecuted_blocks=1 00:15:57.778 00:15:57.778 ' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:57.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.778 --rc genhtml_branch_coverage=1 00:15:57.778 --rc genhtml_function_coverage=1 00:15:57.778 --rc genhtml_legend=1 00:15:57.778 --rc geninfo_all_blocks=1 00:15:57.778 --rc geninfo_unexecuted_blocks=1 00:15:57.778 00:15:57.778 ' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.778 
18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.778 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:57.778 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.779 18:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.779 Cannot find device "nvmf_init_br" 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:57.779 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:58.037 Cannot find device "nvmf_init_br2" 00:15:58.037 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:58.037 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:58.037 Cannot find device "nvmf_tgt_br" 00:15:58.037 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.038 Cannot find device "nvmf_tgt_br2" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:58.038 Cannot find device "nvmf_init_br" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:58.038 Cannot find device "nvmf_init_br2" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:58.038 Cannot find device "nvmf_tgt_br" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:58.038 Cannot find device "nvmf_tgt_br2" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:58.038 Cannot find device "nvmf_br" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:58.038 Cannot find device "nvmf_init_if" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:58.038 Cannot find device "nvmf_init_if2" 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.038 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:58.296 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.296 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.296 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.296 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.296 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:58.296 00:15:58.296 --- 10.0.0.3 ping statistics --- 00:15:58.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.296 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.296 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.296 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:58.296 00:15:58.296 --- 10.0.0.4 ping statistics --- 00:15:58.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.296 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:58.296 00:15:58.296 --- 10.0.0.1 ping statistics --- 00:15:58.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.296 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:58.296 00:15:58.296 --- 10.0.0.2 ping statistics --- 00:15:58.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.296 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=85939 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 85939 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 85939 ']' 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
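Annotation: the nvmf_veth_init steps traced above reduce to a small topology: two initiator-side veth pairs and two target-side veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all host-side peers joined by the nvmf_br bridge so that 10.0.0.1/.2 can reach 10.0.0.3/.4 (which the four pings verify) before nvmf_tgt is started inside the namespace. A minimal sketch reconstructing those traced commands (interface names, addresses, and iptables rules are taken from the log; this is a reconstruction, not part of the test scripts):

    #!/usr/bin/env bash
    # Sketch of the veth/namespace topology built by nvmf_veth_init above.
    # Names and addresses come from the trace; cleanup and error handling omitted.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side and target-side veth pairs.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Accept NVMe/TCP traffic to port 4420 and allow forwarding across the bridge,
    # matching the ipts calls in the trace.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With the bridge in place, the trace then launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waits for its RPC socket, as shown in the lines that follow.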
00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.296 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.296 [2024-12-08 18:33:16.146363] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:58.296 [2024-12-08 18:33:16.146489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.555 [2024-12-08 18:33:16.285839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.555 [2024-12-08 18:33:16.349642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.555 [2024-12-08 18:33:16.349738] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.555 [2024-12-08 18:33:16.349767] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.555 [2024-12-08 18:33:16.349776] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.555 [2024-12-08 18:33:16.349783] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.555 [2024-12-08 18:33:16.350174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.555 [2024-12-08 18:33:16.350554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.555 [2024-12-08 18:33:16.350469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.555 [2024-12-08 18:33:16.350548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.555 [2024-12-08 18:33:16.404570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.555 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.555 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:15:58.555 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:58.555 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.555 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 [2024-12-08 18:33:16.515111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:58.814 18:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 Malloc1 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 [2024-12-08 18:33:16.576327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 Malloc2 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.814 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 Malloc3 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 Malloc4 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.815 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.074 Malloc5 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:59.074 
18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.074 Malloc6 00:15:59.074 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 Malloc7 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 Malloc8 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 
18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 Malloc9 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.075 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 Malloc10 00:15:59.335 18:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 Malloc11 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.335 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:01.877 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:03.780 18:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:03.780 18:33:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.686 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:05.944 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:05.944 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:05.945 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.945 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:16:05.945 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:07.856 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:08.115 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:08.115 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.115 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.115 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:08.115 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:10.019 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:10.278 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:10.278 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.278 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:10.278 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:10.278 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:12.184 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:12.443 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:12.443 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:12.443 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.443 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:12.443 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:14.348 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:14.607 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:14.607 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:14.607 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.607 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:14.607 18:33:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:16.509 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:16.768 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:16.768 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:16.768 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.768 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:16.768 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:18.668 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:18.927 18:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:18.927 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:18.927 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.927 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:18.927 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:20.829 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:20.829 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:20.829 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:21.086 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:22.986 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:23.244 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:23.244 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:23.244 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:23.244 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.244 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:23.244 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:23.244 [global] 00:16:23.244 thread=1 00:16:23.244 invalidate=1 00:16:23.244 rw=read 00:16:23.244 time_based=1 
00:16:23.244 runtime=10 00:16:23.244 ioengine=libaio 00:16:23.244 direct=1 00:16:23.244 bs=262144 00:16:23.244 iodepth=64 00:16:23.244 norandommap=1 00:16:23.244 numjobs=1 00:16:23.244 00:16:23.244 [job0] 00:16:23.244 filename=/dev/nvme0n1 00:16:23.244 [job1] 00:16:23.244 filename=/dev/nvme10n1 00:16:23.244 [job2] 00:16:23.244 filename=/dev/nvme1n1 00:16:23.244 [job3] 00:16:23.244 filename=/dev/nvme2n1 00:16:23.244 [job4] 00:16:23.244 filename=/dev/nvme3n1 00:16:23.244 [job5] 00:16:23.244 filename=/dev/nvme4n1 00:16:23.244 [job6] 00:16:23.244 filename=/dev/nvme5n1 00:16:23.244 [job7] 00:16:23.244 filename=/dev/nvme6n1 00:16:23.244 [job8] 00:16:23.244 filename=/dev/nvme7n1 00:16:23.244 [job9] 00:16:23.244 filename=/dev/nvme8n1 00:16:23.244 [job10] 00:16:23.244 filename=/dev/nvme9n1 00:16:23.244 Could not set queue depth (nvme0n1) 00:16:23.244 Could not set queue depth (nvme10n1) 00:16:23.244 Could not set queue depth (nvme1n1) 00:16:23.244 Could not set queue depth (nvme2n1) 00:16:23.244 Could not set queue depth (nvme3n1) 00:16:23.244 Could not set queue depth (nvme4n1) 00:16:23.244 Could not set queue depth (nvme5n1) 00:16:23.244 Could not set queue depth (nvme6n1) 00:16:23.244 Could not set queue depth (nvme7n1) 00:16:23.244 Could not set queue depth (nvme8n1) 00:16:23.244 Could not set queue depth (nvme9n1) 00:16:23.502 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.502 fio-3.35 00:16:23.502 Starting 11 threads 00:16:35.733 00:16:35.733 job0: (groupid=0, jobs=1): err= 0: pid=86388: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=424, BW=106MiB/s (111MB/s)(1067MiB/10054msec) 00:16:35.734 slat (usec): min=21, max=45258, avg=2339.08, stdev=5341.97 00:16:35.734 clat (msec): min=33, max=197, avg=148.35, stdev=17.84 00:16:35.734 lat (msec): min=33, max=219, avg=150.69, stdev=17.98 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 70], 5.00th=[ 117], 10.00th=[ 132], 20.00th=[ 140], 00:16:35.734 | 30.00th=[ 144], 40.00th=[ 148], 50.00th=[ 153], 60.00th=[ 155], 00:16:35.734 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 169], 00:16:35.734 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 199], 99.95th=[ 199], 00:16:35.734 | 99.99th=[ 199] 00:16:35.734 bw ( KiB/s): min=101888, max=116224, per=16.74%, 
avg=107607.00, stdev=3293.52, samples=20 00:16:35.734 iops : min= 398, max= 454, avg=420.20, stdev=12.93, samples=20 00:16:35.734 lat (msec) : 50=0.61%, 100=1.10%, 250=98.29% 00:16:35.734 cpu : usr=0.20%, sys=2.02%, ctx=875, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=4266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job1: (groupid=0, jobs=1): err= 0: pid=86390: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=98, BW=24.7MiB/s (25.9MB/s)(250MiB/10131msec) 00:16:35.734 slat (usec): min=26, max=405889, avg=9426.37, stdev=28249.08 00:16:35.734 clat (msec): min=40, max=1209, avg=637.46, stdev=179.38 00:16:35.734 lat (msec): min=40, max=1209, avg=646.88, stdev=182.01 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 67], 5.00th=[ 241], 10.00th=[ 456], 20.00th=[ 567], 00:16:35.734 | 30.00th=[ 600], 40.00th=[ 617], 50.00th=[ 651], 60.00th=[ 667], 00:16:35.734 | 70.00th=[ 693], 80.00th=[ 735], 90.00th=[ 869], 95.00th=[ 961], 00:16:35.734 | 99.00th=[ 1020], 99.50th=[ 1028], 99.90th=[ 1083], 99.95th=[ 1217], 00:16:35.734 | 99.99th=[ 1217] 00:16:35.734 bw ( KiB/s): min= 8192, max=40448, per=3.73%, avg=23984.45, stdev=7169.19, samples=20 00:16:35.734 iops : min= 32, max= 158, avg=93.65, stdev=27.99, samples=20 00:16:35.734 lat (msec) : 50=1.00%, 100=0.80%, 250=3.90%, 500=7.39%, 750=70.53% 00:16:35.734 lat (msec) : 1000=14.19%, 2000=2.20% 00:16:35.734 cpu : usr=0.06%, sys=0.59%, ctx=231, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=1001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job2: (groupid=0, jobs=1): err= 0: pid=86394: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=169, BW=42.5MiB/s (44.6MB/s)(431MiB/10130msec) 00:16:35.734 slat (usec): min=21, max=170016, avg=5662.79, stdev=16149.37 00:16:35.734 clat (msec): min=21, max=794, avg=370.08, stdev=231.15 00:16:35.734 lat (msec): min=21, max=794, avg=375.74, stdev=234.81 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 46], 5.00th=[ 144], 10.00th=[ 155], 20.00th=[ 161], 00:16:35.734 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 230], 60.00th=[ 477], 00:16:35.734 | 70.00th=[ 609], 80.00th=[ 642], 90.00th=[ 676], 95.00th=[ 701], 00:16:35.734 | 99.00th=[ 743], 99.50th=[ 751], 99.90th=[ 793], 99.95th=[ 793], 00:16:35.734 | 99.99th=[ 793] 00:16:35.734 bw ( KiB/s): min=20992, max=100352, per=6.60%, avg=42455.85, stdev=29823.47, samples=20 00:16:35.734 iops : min= 82, max= 392, avg=165.75, stdev=116.46, samples=20 00:16:35.734 lat (msec) : 50=1.05%, 250=51.80%, 500=7.90%, 750=38.68%, 1000=0.58% 00:16:35.734 cpu : usr=0.07%, sys=0.83%, ctx=362, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=1722,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job3: (groupid=0, jobs=1): err= 0: pid=86395: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=168, BW=42.2MiB/s (44.3MB/s)(428MiB/10135msec) 00:16:35.734 slat (usec): min=22, max=109324, avg=5856.96, stdev=16328.26 00:16:35.734 clat (msec): min=19, max=784, avg=372.68, stdev=229.53 00:16:35.734 lat (msec): min=19, max=785, avg=378.53, stdev=233.05 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 65], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 163], 00:16:35.734 | 30.00th=[ 171], 40.00th=[ 182], 50.00th=[ 230], 60.00th=[ 502], 00:16:35.734 | 70.00th=[ 617], 80.00th=[ 642], 90.00th=[ 684], 95.00th=[ 701], 00:16:35.734 | 99.00th=[ 726], 99.50th=[ 726], 99.90th=[ 785], 99.95th=[ 785], 00:16:35.734 | 99.99th=[ 785] 00:16:35.734 bw ( KiB/s): min=21504, max=100352, per=6.56%, avg=42174.15, stdev=28901.18, samples=20 00:16:35.734 iops : min= 84, max= 392, avg=164.65, stdev=112.84, samples=20 00:16:35.734 lat (msec) : 20=0.23%, 50=0.18%, 100=0.88%, 250=50.91%, 500=8.01% 00:16:35.734 lat (msec) : 750=39.57%, 1000=0.23% 00:16:35.734 cpu : usr=0.09%, sys=0.81%, ctx=326, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job4: (groupid=0, jobs=1): err= 0: pid=86396: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=140, BW=35.1MiB/s (36.8MB/s)(355MiB/10114msec) 00:16:35.734 slat (usec): min=21, max=321900, avg=7039.31, stdev=21352.41 00:16:35.734 clat (msec): min=24, max=1095, avg=448.06, stdev=173.58 00:16:35.734 lat (msec): min=25, max=1172, avg=455.10, stdev=176.27 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 66], 5.00th=[ 271], 10.00th=[ 330], 20.00th=[ 359], 00:16:35.734 | 30.00th=[ 380], 40.00th=[ 393], 50.00th=[ 405], 60.00th=[ 418], 00:16:35.734 | 70.00th=[ 430], 80.00th=[ 456], 90.00th=[ 827], 95.00th=[ 877], 00:16:35.734 | 99.00th=[ 927], 99.50th=[ 936], 99.90th=[ 995], 99.95th=[ 1099], 00:16:35.734 | 99.99th=[ 1099] 00:16:35.734 bw ( KiB/s): min=11264, max=45568, per=5.40%, avg=34745.30, stdev=10686.73, samples=20 00:16:35.734 iops : min= 44, max= 178, avg=135.60, stdev=41.68, samples=20 00:16:35.734 lat (msec) : 50=0.56%, 100=0.49%, 250=3.17%, 500=79.93%, 750=3.59% 00:16:35.734 lat (msec) : 1000=12.18%, 2000=0.07% 00:16:35.734 cpu : usr=0.06%, sys=0.70%, ctx=295, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job5: (groupid=0, jobs=1): err= 0: pid=86402: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=359, BW=90.0MiB/s (94.3MB/s)(906MiB/10066msec) 00:16:35.734 slat (usec): min=22, max=54734, avg=2757.69, stdev=6298.55 00:16:35.734 clat (msec): min=25, max=341, avg=174.75, stdev=21.01 00:16:35.734 lat (msec): min=25, max=341, avg=177.51, stdev=21.27 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 
110], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:16:35.734 | 30.00th=[ 169], 40.00th=[ 171], 50.00th=[ 176], 60.00th=[ 178], 00:16:35.734 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 197], 00:16:35.734 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 342], 00:16:35.734 | 99.99th=[ 342] 00:16:35.734 bw ( KiB/s): min=61828, max=97280, per=14.17%, avg=91113.45, stdev=7293.92, samples=20 00:16:35.734 iops : min= 241, max= 380, avg=355.80, stdev=28.56, samples=20 00:16:35.734 lat (msec) : 50=0.11%, 100=0.72%, 250=97.85%, 500=1.33% 00:16:35.734 cpu : usr=0.18%, sys=1.62%, ctx=796, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=3622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job6: (groupid=0, jobs=1): err= 0: pid=86404: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=143, BW=35.9MiB/s (37.7MB/s)(364MiB/10116msec) 00:16:35.734 slat (usec): min=23, max=393177, avg=6823.39, stdev=22836.65 00:16:35.734 clat (msec): min=35, max=1140, avg=437.96, stdev=183.99 00:16:35.734 lat (msec): min=35, max=1265, avg=444.79, stdev=186.91 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 91], 5.00th=[ 305], 10.00th=[ 334], 20.00th=[ 355], 00:16:35.734 | 30.00th=[ 368], 40.00th=[ 376], 50.00th=[ 393], 60.00th=[ 405], 00:16:35.734 | 70.00th=[ 418], 80.00th=[ 439], 90.00th=[ 684], 95.00th=[ 919], 00:16:35.734 | 99.00th=[ 1036], 99.50th=[ 1062], 99.90th=[ 1070], 99.95th=[ 1133], 00:16:35.734 | 99.99th=[ 1133] 00:16:35.734 bw ( KiB/s): min= 7168, max=46592, per=5.54%, avg=35595.50, stdev=11753.26, samples=20 00:16:35.734 iops : min= 28, max= 182, avg=138.95, stdev=45.85, samples=20 00:16:35.734 lat (msec) : 50=0.07%, 100=1.99%, 250=2.54%, 500=79.64%, 750=6.53% 00:16:35.734 lat (msec) : 1000=6.60%, 2000=2.61% 00:16:35.734 cpu : usr=0.11%, sys=0.69%, ctx=284, majf=0, minf=4098 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.7% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=1454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job7: (groupid=0, jobs=1): err= 0: pid=86405: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=96, BW=24.2MiB/s (25.4MB/s)(246MiB/10140msec) 00:16:35.734 slat (usec): min=22, max=459868, avg=9375.51, stdev=29843.61 00:16:35.734 clat (msec): min=18, max=1351, avg=649.95, stdev=210.15 00:16:35.734 lat (msec): min=18, max=1351, avg=659.32, stdev=213.25 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 106], 5.00th=[ 194], 10.00th=[ 313], 20.00th=[ 550], 00:16:35.734 | 30.00th=[ 609], 40.00th=[ 642], 50.00th=[ 684], 60.00th=[ 718], 00:16:35.734 | 70.00th=[ 743], 80.00th=[ 793], 90.00th=[ 869], 95.00th=[ 969], 00:16:35.734 | 99.00th=[ 1053], 99.50th=[ 1053], 99.90th=[ 1351], 99.95th=[ 1351], 00:16:35.734 | 99.99th=[ 1351] 00:16:35.734 bw ( KiB/s): min= 8704, max=38912, per=3.66%, avg=23532.55, stdev=7340.55, samples=20 00:16:35.734 iops : min= 34, max= 152, avg=91.80, stdev=28.64, samples=20 00:16:35.734 lat (msec) : 20=0.10%, 250=7.93%, 500=7.73%, 750=55.75%, 
1000=24.11% 00:16:35.734 lat (msec) : 2000=4.37% 00:16:35.734 cpu : usr=0.03%, sys=0.55%, ctx=229, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job8: (groupid=0, jobs=1): err= 0: pid=86406: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=423, BW=106MiB/s (111MB/s)(1066MiB/10054msec) 00:16:35.734 slat (usec): min=25, max=62802, avg=2342.38, stdev=5423.98 00:16:35.734 clat (msec): min=42, max=206, avg=148.50, stdev=16.31 00:16:35.734 lat (msec): min=69, max=206, avg=150.84, stdev=16.40 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 89], 5.00th=[ 115], 10.00th=[ 131], 20.00th=[ 140], 00:16:35.734 | 30.00th=[ 144], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 155], 00:16:35.734 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 169], 00:16:35.734 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 207], 00:16:35.734 | 99.99th=[ 207] 00:16:35.734 bw ( KiB/s): min=102092, max=120320, per=16.72%, avg=107479.10, stdev=4503.16, samples=20 00:16:35.734 iops : min= 398, max= 470, avg=419.70, stdev=17.64, samples=20 00:16:35.734 lat (msec) : 50=0.02%, 100=1.97%, 250=98.01% 00:16:35.734 cpu : usr=0.24%, sys=2.30%, ctx=858, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=4262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job9: (groupid=0, jobs=1): err= 0: pid=86407: Sun Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=139, BW=34.9MiB/s (36.6MB/s)(353MiB/10109msec) 00:16:35.734 slat (usec): min=22, max=260672, avg=6881.69, stdev=20146.46 00:16:35.734 clat (msec): min=22, max=1110, avg=451.24, stdev=181.74 00:16:35.734 lat (msec): min=22, max=1110, avg=458.12, stdev=183.68 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 26], 5.00th=[ 284], 10.00th=[ 317], 20.00th=[ 355], 00:16:35.734 | 30.00th=[ 376], 40.00th=[ 393], 50.00th=[ 414], 60.00th=[ 426], 00:16:35.734 | 70.00th=[ 447], 80.00th=[ 477], 90.00th=[ 760], 95.00th=[ 902], 00:16:35.734 | 99.00th=[ 986], 99.50th=[ 1003], 99.90th=[ 1116], 99.95th=[ 1116], 00:16:35.734 | 99.99th=[ 1116] 00:16:35.734 bw ( KiB/s): min= 9728, max=45568, per=5.36%, avg=34456.95, stdev=10472.31, samples=20 00:16:35.734 iops : min= 38, max= 178, avg=134.55, stdev=40.89, samples=20 00:16:35.734 lat (msec) : 50=1.06%, 100=0.14%, 250=2.98%, 500=79.79%, 750=5.74% 00:16:35.734 lat (msec) : 1000=9.72%, 2000=0.57% 00:16:35.734 cpu : usr=0.07%, sys=0.79%, ctx=273, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 job10: (groupid=0, jobs=1): err= 0: pid=86408: Sun 
Dec 8 18:33:51 2024 00:16:35.734 read: IOPS=358, BW=89.7MiB/s (94.1MB/s)(904MiB/10076msec) 00:16:35.734 slat (usec): min=21, max=65981, avg=2762.59, stdev=6274.86 00:16:35.734 clat (msec): min=22, max=305, avg=175.29, stdev=20.54 00:16:35.734 lat (msec): min=22, max=307, avg=178.06, stdev=20.87 00:16:35.734 clat percentiles (msec): 00:16:35.734 | 1.00th=[ 111], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:16:35.734 | 30.00th=[ 169], 40.00th=[ 171], 50.00th=[ 176], 60.00th=[ 178], 00:16:35.734 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 199], 00:16:35.734 | 99.00th=[ 259], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:16:35.734 | 99.99th=[ 305] 00:16:35.734 bw ( KiB/s): min=59511, max=97792, per=14.15%, avg=90972.80, stdev=7813.18, samples=20 00:16:35.734 iops : min= 232, max= 382, avg=355.20, stdev=30.58, samples=20 00:16:35.734 lat (msec) : 50=0.03%, 100=0.75%, 250=97.71%, 500=1.52% 00:16:35.734 cpu : usr=0.23%, sys=1.68%, ctx=766, majf=0, minf=4097 00:16:35.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:35.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.734 issued rwts: total=3617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.734 00:16:35.734 Run status group 0 (all jobs): 00:16:35.734 READ: bw=628MiB/s (658MB/s), 24.2MiB/s-106MiB/s (25.4MB/s-111MB/s), io=6367MiB (6676MB), run=10054-10140msec 00:16:35.734 00:16:35.734 Disk stats (read/write): 00:16:35.734 nvme0n1: ios=8422/0, merge=0/0, ticks=1236596/0, in_queue=1236596, util=97.89% 00:16:35.734 nvme10n1: ios=1877/0, merge=0/0, ticks=1204914/0, in_queue=1204914, util=97.89% 00:16:35.734 nvme1n1: ios=3317/0, merge=0/0, ticks=1202527/0, in_queue=1202527, util=98.07% 00:16:35.734 nvme2n1: ios=3295/0, merge=0/0, ticks=1211610/0, in_queue=1211610, util=98.21% 00:16:35.734 nvme3n1: ios=2722/0, merge=0/0, ticks=1222183/0, in_queue=1222183, util=98.28% 00:16:35.734 nvme4n1: ios=7117/0, merge=0/0, ticks=1232548/0, in_queue=1232548, util=98.45% 00:16:35.734 nvme5n1: ios=2787/0, merge=0/0, ticks=1227142/0, in_queue=1227142, util=98.68% 00:16:35.734 nvme6n1: ios=1846/0, merge=0/0, ticks=1206969/0, in_queue=1206969, util=98.73% 00:16:35.734 nvme7n1: ios=8400/0, merge=0/0, ticks=1234103/0, in_queue=1234103, util=99.06% 00:16:35.734 nvme8n1: ios=2692/0, merge=0/0, ticks=1225517/0, in_queue=1225517, util=98.98% 00:16:35.734 nvme9n1: ios=7110/0, merge=0/0, ticks=1235520/0, in_queue=1235520, util=99.20% 00:16:35.734 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:35.734 [global] 00:16:35.734 thread=1 00:16:35.734 invalidate=1 00:16:35.734 rw=randwrite 00:16:35.734 time_based=1 00:16:35.734 runtime=10 00:16:35.734 ioengine=libaio 00:16:35.734 direct=1 00:16:35.734 bs=262144 00:16:35.734 iodepth=64 00:16:35.734 norandommap=1 00:16:35.734 numjobs=1 00:16:35.734 00:16:35.734 [job0] 00:16:35.734 filename=/dev/nvme0n1 00:16:35.734 [job1] 00:16:35.734 filename=/dev/nvme10n1 00:16:35.734 [job2] 00:16:35.734 filename=/dev/nvme1n1 00:16:35.734 [job3] 00:16:35.734 filename=/dev/nvme2n1 00:16:35.735 [job4] 00:16:35.735 filename=/dev/nvme3n1 00:16:35.735 [job5] 00:16:35.735 filename=/dev/nvme4n1 00:16:35.735 [job6] 00:16:35.735 filename=/dev/nvme5n1 00:16:35.735 [job7] 
00:16:35.735 filename=/dev/nvme6n1 00:16:35.735 [job8] 00:16:35.735 filename=/dev/nvme7n1 00:16:35.735 [job9] 00:16:35.735 filename=/dev/nvme8n1 00:16:35.735 [job10] 00:16:35.735 filename=/dev/nvme9n1 00:16:35.735 Could not set queue depth (nvme0n1) 00:16:35.735 Could not set queue depth (nvme10n1) 00:16:35.735 Could not set queue depth (nvme1n1) 00:16:35.735 Could not set queue depth (nvme2n1) 00:16:35.735 Could not set queue depth (nvme3n1) 00:16:35.735 Could not set queue depth (nvme4n1) 00:16:35.735 Could not set queue depth (nvme5n1) 00:16:35.735 Could not set queue depth (nvme6n1) 00:16:35.735 Could not set queue depth (nvme7n1) 00:16:35.735 Could not set queue depth (nvme8n1) 00:16:35.735 Could not set queue depth (nvme9n1) 00:16:35.735 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.735 fio-3.35 00:16:35.735 Starting 11 threads 00:16:45.712 00:16:45.712 job0: (groupid=0, jobs=1): err= 0: pid=86604: Sun Dec 8 18:34:02 2024 00:16:45.712 write: IOPS=880, BW=220MiB/s (231MB/s)(2216MiB/10062msec); 0 zone resets 00:16:45.712 slat (usec): min=19, max=6412, avg=1112.17, stdev=1873.59 00:16:45.712 clat (msec): min=5, max=129, avg=71.51, stdev= 4.70 00:16:45.712 lat (msec): min=5, max=129, avg=72.62, stdev= 4.43 00:16:45.712 clat percentiles (msec): 00:16:45.712 | 1.00th=[ 67], 5.00th=[ 68], 10.00th=[ 68], 20.00th=[ 69], 00:16:45.712 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:16:45.712 | 70.00th=[ 73], 80.00th=[ 73], 90.00th=[ 74], 95.00th=[ 74], 00:16:45.712 | 99.00th=[ 83], 99.50th=[ 91], 99.90th=[ 122], 99.95th=[ 126], 00:16:45.712 | 99.99th=[ 130] 00:16:45.712 bw ( KiB/s): min=212905, max=228352, per=31.47%, avg=225304.20, stdev=3333.93, samples=20 00:16:45.712 iops : min= 831, max= 892, avg=880.05, stdev=13.15, samples=20 00:16:45.712 lat (msec) : 10=0.05%, 20=0.09%, 50=0.24%, 100=99.32%, 250=0.30% 00:16:45.712 cpu : usr=1.55%, sys=2.38%, ctx=11173, majf=0, minf=1 00:16:45.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:45.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.712 
issued rwts: total=0,8864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.712 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.712 job1: (groupid=0, jobs=1): err= 0: pid=86605: Sun Dec 8 18:34:02 2024 00:16:45.712 write: IOPS=240, BW=60.1MiB/s (63.0MB/s)(612MiB/10177msec); 0 zone resets 00:16:45.712 slat (usec): min=19, max=66085, avg=4067.31, stdev=7662.76 00:16:45.712 clat (msec): min=7, max=438, avg=262.10, stdev=81.75 00:16:45.712 lat (msec): min=7, max=438, avg=266.17, stdev=82.71 00:16:45.712 clat percentiles (msec): 00:16:45.712 | 1.00th=[ 42], 5.00th=[ 68], 10.00th=[ 84], 20.00th=[ 251], 00:16:45.712 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 271], 00:16:45.712 | 70.00th=[ 275], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 372], 00:16:45.712 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 439], 00:16:45.712 | 99.99th=[ 439] 00:16:45.712 bw ( KiB/s): min=45056, max=154624, per=8.52%, avg=61004.80, stdev=23155.75, samples=20 00:16:45.712 iops : min= 176, max= 604, avg=238.30, stdev=90.45, samples=20 00:16:45.712 lat (msec) : 10=0.12%, 20=0.33%, 50=0.82%, 100=9.98%, 250=6.58% 00:16:45.712 lat (msec) : 500=82.17% 00:16:45.712 cpu : usr=0.44%, sys=0.67%, ctx=3140, majf=0, minf=1 00:16:45.712 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:16:45.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.712 issued rwts: total=0,2446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.712 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.712 job2: (groupid=0, jobs=1): err= 0: pid=86617: Sun Dec 8 18:34:02 2024 00:16:45.712 write: IOPS=175, BW=43.9MiB/s (46.1MB/s)(451MiB/10260msec); 0 zone resets 00:16:45.712 slat (usec): min=22, max=64356, avg=5547.36, stdev=10051.07 00:16:45.712 clat (msec): min=11, max=666, avg=358.45, stdev=77.77 00:16:45.712 lat (msec): min=11, max=666, avg=364.00, stdev=78.39 00:16:45.712 clat percentiles (msec): 00:16:45.712 | 1.00th=[ 105], 5.00th=[ 268], 10.00th=[ 275], 20.00th=[ 288], 00:16:45.712 | 30.00th=[ 292], 40.00th=[ 317], 50.00th=[ 388], 60.00th=[ 409], 00:16:45.712 | 70.00th=[ 414], 80.00th=[ 418], 90.00th=[ 430], 95.00th=[ 443], 00:16:45.712 | 99.00th=[ 535], 99.50th=[ 617], 99.90th=[ 667], 99.95th=[ 667], 00:16:45.712 | 99.99th=[ 667] 00:16:45.712 bw ( KiB/s): min=36864, max=57344, per=6.22%, avg=44538.70, stdev=7945.27, samples=20 00:16:45.712 iops : min= 144, max= 224, avg=173.95, stdev=31.01, samples=20 00:16:45.712 lat (msec) : 20=0.17%, 50=0.22%, 100=0.44%, 250=1.83%, 500=95.90% 00:16:45.712 lat (msec) : 750=1.44% 00:16:45.712 cpu : usr=0.43%, sys=0.47%, ctx=1614, majf=0, minf=1 00:16:45.712 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.712 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.712 issued rwts: total=0,1803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.712 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.712 job3: (groupid=0, jobs=1): err= 0: pid=86618: Sun Dec 8 18:34:02 2024 00:16:45.712 write: IOPS=174, BW=43.7MiB/s (45.8MB/s)(448MiB/10257msec); 0 zone resets 00:16:45.712 slat (usec): min=25, max=65049, avg=5542.32, stdev=10137.45 00:16:45.712 clat (msec): min=22, max=657, avg=360.79, stdev=76.39 00:16:45.712 lat (msec): min=22, max=657, avg=366.33, stdev=77.04 00:16:45.712 clat 
percentiles (msec): 00:16:45.712 | 1.00th=[ 117], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 288], 00:16:45.712 | 30.00th=[ 292], 40.00th=[ 338], 50.00th=[ 393], 60.00th=[ 409], 00:16:45.712 | 70.00th=[ 414], 80.00th=[ 418], 90.00th=[ 435], 95.00th=[ 447], 00:16:45.712 | 99.00th=[ 550], 99.50th=[ 609], 99.90th=[ 659], 99.95th=[ 659], 00:16:45.712 | 99.99th=[ 659] 00:16:45.712 bw ( KiB/s): min=36864, max=57344, per=6.17%, avg=44200.60, stdev=7812.87, samples=20 00:16:45.712 iops : min= 144, max= 224, avg=172.60, stdev=30.46, samples=20 00:16:45.712 lat (msec) : 50=0.22%, 100=0.67%, 250=1.84%, 500=95.81%, 750=1.45% 00:16:45.712 cpu : usr=0.38%, sys=0.62%, ctx=1309, majf=0, minf=1 00:16:45.712 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.712 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.712 issued rwts: total=0,1791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.712 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.712 job4: (groupid=0, jobs=1): err= 0: pid=86619: Sun Dec 8 18:34:02 2024 00:16:45.712 write: IOPS=165, BW=41.5MiB/s (43.5MB/s)(425MiB/10238msec); 0 zone resets 00:16:45.713 slat (usec): min=20, max=103046, avg=5667.48, stdev=11038.63 00:16:45.713 clat (msec): min=72, max=651, avg=379.79, stdev=71.45 00:16:45.713 lat (msec): min=72, max=651, avg=385.46, stdev=72.22 00:16:45.713 clat percentiles (msec): 00:16:45.713 | 1.00th=[ 93], 5.00th=[ 199], 10.00th=[ 313], 20.00th=[ 372], 00:16:45.713 | 30.00th=[ 384], 40.00th=[ 393], 50.00th=[ 401], 60.00th=[ 409], 00:16:45.713 | 70.00th=[ 409], 80.00th=[ 414], 90.00th=[ 418], 95.00th=[ 418], 00:16:45.713 | 99.00th=[ 550], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 651], 00:16:45.713 | 99.99th=[ 651] 00:16:45.713 bw ( KiB/s): min=37376, max=69120, per=5.85%, avg=41885.70, stdev=6628.63, samples=20 00:16:45.713 iops : min= 146, max= 270, avg=163.60, stdev=25.90, samples=20 00:16:45.713 lat (msec) : 100=1.18%, 250=5.89%, 500=91.64%, 750=1.29% 00:16:45.713 cpu : usr=0.30%, sys=0.54%, ctx=2275, majf=0, minf=1 00:16:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:16:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.713 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.713 issued rwts: total=0,1699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.713 job5: (groupid=0, jobs=1): err= 0: pid=86620: Sun Dec 8 18:34:02 2024 00:16:45.713 write: IOPS=216, BW=54.2MiB/s (56.9MB/s)(553MiB/10197msec); 0 zone resets 00:16:45.713 slat (usec): min=23, max=90474, avg=4520.74, stdev=8481.98 00:16:45.713 clat (msec): min=11, max=451, avg=290.36, stdev=62.43 00:16:45.713 lat (msec): min=11, max=451, avg=294.88, stdev=62.87 00:16:45.713 clat percentiles (msec): 00:16:45.713 | 1.00th=[ 72], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 262], 00:16:45.713 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:16:45.713 | 70.00th=[ 279], 80.00th=[ 351], 90.00th=[ 397], 95.00th=[ 405], 00:16:45.713 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 435], 99.95th=[ 451], 00:16:45.713 | 99.99th=[ 451] 00:16:45.713 bw ( KiB/s): min=40960, max=61440, per=7.69%, avg=55019.90, stdev=8881.49, samples=20 00:16:45.713 iops : min= 160, max= 240, avg=214.90, stdev=34.69, samples=20 00:16:45.713 lat (msec) : 20=0.50%, 50=0.23%, 100=0.54%, 
250=7.28%, 500=91.46% 00:16:45.713 cpu : usr=0.40%, sys=0.77%, ctx=2391, majf=0, minf=1 00:16:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:16:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.713 issued rwts: total=0,2212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.713 job6: (groupid=0, jobs=1): err= 0: pid=86621: Sun Dec 8 18:34:02 2024 00:16:45.713 write: IOPS=176, BW=44.1MiB/s (46.2MB/s)(452MiB/10251msec); 0 zone resets 00:16:45.713 slat (usec): min=19, max=55436, avg=5465.59, stdev=9922.93 00:16:45.713 clat (msec): min=33, max=660, avg=357.41, stdev=74.91 00:16:45.713 lat (msec): min=33, max=660, avg=362.88, stdev=75.54 00:16:45.713 clat percentiles (msec): 00:16:45.713 | 1.00th=[ 126], 5.00th=[ 268], 10.00th=[ 275], 20.00th=[ 288], 00:16:45.713 | 30.00th=[ 292], 40.00th=[ 321], 50.00th=[ 388], 60.00th=[ 409], 00:16:45.713 | 70.00th=[ 409], 80.00th=[ 418], 90.00th=[ 422], 95.00th=[ 435], 00:16:45.713 | 99.00th=[ 531], 99.50th=[ 609], 99.90th=[ 659], 99.95th=[ 659], 00:16:45.713 | 99.99th=[ 659] 00:16:45.713 bw ( KiB/s): min=36864, max=57344, per=6.24%, avg=44640.65, stdev=7881.36, samples=20 00:16:45.713 iops : min= 144, max= 224, avg=174.35, stdev=30.74, samples=20 00:16:45.713 lat (msec) : 50=0.17%, 100=0.55%, 250=1.88%, 500=95.96%, 750=1.44% 00:16:45.713 cpu : usr=0.36%, sys=0.58%, ctx=2112, majf=0, minf=1 00:16:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.713 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.713 issued rwts: total=0,1807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.713 job7: (groupid=0, jobs=1): err= 0: pid=86622: Sun Dec 8 18:34:02 2024 00:16:45.713 write: IOPS=175, BW=43.8MiB/s (45.9MB/s)(449MiB/10248msec); 0 zone resets 00:16:45.713 slat (usec): min=20, max=44404, avg=5404.43, stdev=9845.96 00:16:45.713 clat (msec): min=38, max=658, avg=359.58, stdev=69.29 00:16:45.713 lat (msec): min=38, max=658, avg=364.99, stdev=69.83 00:16:45.713 clat percentiles (msec): 00:16:45.713 | 1.00th=[ 192], 5.00th=[ 271], 10.00th=[ 279], 20.00th=[ 288], 00:16:45.713 | 30.00th=[ 292], 40.00th=[ 347], 50.00th=[ 388], 60.00th=[ 409], 00:16:45.713 | 70.00th=[ 409], 80.00th=[ 414], 90.00th=[ 418], 95.00th=[ 422], 00:16:45.713 | 99.00th=[ 558], 99.50th=[ 609], 99.90th=[ 659], 99.95th=[ 659], 00:16:45.713 | 99.99th=[ 659] 00:16:45.713 bw ( KiB/s): min=37376, max=57344, per=6.20%, avg=44363.35, stdev=7488.39, samples=20 00:16:45.713 iops : min= 146, max= 224, avg=173.25, stdev=29.21, samples=20 00:16:45.713 lat (msec) : 50=0.17%, 250=1.61%, 500=96.77%, 750=1.45% 00:16:45.713 cpu : usr=0.36%, sys=0.68%, ctx=2397, majf=0, minf=1 00:16:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.713 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.713 issued rwts: total=0,1796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.713 job8: (groupid=0, jobs=1): err= 0: pid=86623: Sun Dec 8 18:34:02 2024 00:16:45.713 write: IOPS=174, 
BW=43.7MiB/s (45.8MB/s)(448MiB/10249msec); 0 zone resets 00:16:45.713 slat (usec): min=19, max=183370, avg=5541.39, stdev=10666.73 00:16:45.713 clat (msec): min=184, max=648, avg=360.43, stdev=64.63 00:16:45.713 lat (msec): min=184, max=648, avg=365.97, stdev=64.90 00:16:45.713 clat percentiles (msec): 00:16:45.713 | 1.00th=[ 251], 5.00th=[ 271], 10.00th=[ 284], 20.00th=[ 288], 00:16:45.713 | 30.00th=[ 292], 40.00th=[ 368], 50.00th=[ 388], 60.00th=[ 405], 00:16:45.713 | 70.00th=[ 409], 80.00th=[ 414], 90.00th=[ 418], 95.00th=[ 418], 00:16:45.713 | 99.00th=[ 542], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 651], 00:16:45.713 | 99.99th=[ 651] 00:16:45.713 bw ( KiB/s): min=36352, max=57344, per=6.17%, avg=44205.45, stdev=7575.16, samples=20 00:16:45.713 iops : min= 142, max= 224, avg=172.65, stdev=29.54, samples=20 00:16:45.713 lat (msec) : 250=0.95%, 500=97.82%, 750=1.23% 00:16:45.713 cpu : usr=0.40%, sys=0.44%, ctx=2352, majf=0, minf=1 00:16:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.713 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.713 issued rwts: total=0,1791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.713 job9: (groupid=0, jobs=1): err= 0: pid=86624: Sun Dec 8 18:34:02 2024 00:16:45.713 write: IOPS=221, BW=55.4MiB/s (58.1MB/s)(564MiB/10178msec); 0 zone resets 00:16:45.713 slat (usec): min=21, max=53852, avg=4426.58, stdev=7931.13 00:16:45.713 clat (msec): min=26, max=446, avg=284.16, stdev=49.19 00:16:45.713 lat (msec): min=26, max=446, avg=288.59, stdev=49.39 00:16:45.713 clat percentiles (msec): 00:16:45.713 | 1.00th=[ 109], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 262], 00:16:45.713 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:16:45.713 | 70.00th=[ 279], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 368], 00:16:45.713 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 430], 99.95th=[ 447], 00:16:45.713 | 99.99th=[ 447] 00:16:45.713 bw ( KiB/s): min=45056, max=61440, per=7.84%, avg=56140.80, stdev=6955.58, samples=20 00:16:45.713 iops : min= 176, max= 240, avg=219.30, stdev=27.17, samples=20 00:16:45.713 lat (msec) : 50=0.35%, 100=0.53%, 250=7.85%, 500=91.27% 00:16:45.713 cpu : usr=0.48%, sys=0.67%, ctx=2441, majf=0, minf=1 00:16:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:16:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.713 issued rwts: total=0,2256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.713 job10: (groupid=0, jobs=1): err= 0: pid=86625: Sun Dec 8 18:34:02 2024 00:16:45.713 write: IOPS=218, BW=54.6MiB/s (57.3MB/s)(557MiB/10195msec); 0 zone resets 00:16:45.713 slat (usec): min=22, max=84012, avg=4483.28, stdev=8335.48 00:16:45.713 clat (msec): min=13, max=462, avg=288.25, stdev=61.77 00:16:45.713 lat (msec): min=13, max=462, avg=292.73, stdev=62.24 00:16:45.713 clat percentiles (msec): 00:16:45.713 | 1.00th=[ 66], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 257], 00:16:45.713 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 268], 60.00th=[ 271], 00:16:45.713 | 70.00th=[ 275], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 401], 00:16:45.713 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 464], 
00:16:45.713 | 99.99th=[ 464] 00:16:45.713 bw ( KiB/s): min=40960, max=65405, per=7.75%, avg=55484.55, stdev=9164.14, samples=20 00:16:45.713 iops : min= 160, max= 255, avg=216.45, stdev=35.72, samples=20 00:16:45.713 lat (msec) : 20=0.36%, 50=0.54%, 100=0.54%, 250=8.84%, 500=89.72% 00:16:45.713 cpu : usr=0.46%, sys=0.68%, ctx=748, majf=0, minf=1 00:16:45.713 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:16:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.713 issued rwts: total=0,2228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.713 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.713 00:16:45.713 Run status group 0 (all jobs): 00:16:45.713 WRITE: bw=699MiB/s (733MB/s), 41.5MiB/s-220MiB/s (43.5MB/s-231MB/s), io=7173MiB (7522MB), run=10062-10260msec 00:16:45.713 00:16:45.713 Disk stats (read/write): 00:16:45.713 nvme0n1: ios=49/17571, merge=0/0, ticks=54/1216797, in_queue=1216851, util=97.73% 00:16:45.713 nvme10n1: ios=49/4754, merge=0/0, ticks=47/1204031, in_queue=1204078, util=97.70% 00:16:45.713 nvme1n1: ios=36/3599, merge=0/0, ticks=36/1241256, in_queue=1241292, util=98.12% 00:16:45.714 nvme2n1: ios=13/3571, merge=0/0, ticks=21/1240415, in_queue=1240436, util=98.02% 00:16:45.714 nvme3n1: ios=13/3386, merge=0/0, ticks=30/1238756, in_queue=1238786, util=97.88% 00:16:45.714 nvme4n1: ios=0/4298, merge=0/0, ticks=0/1207077, in_queue=1207077, util=98.27% 00:16:45.714 nvme5n1: ios=0/3604, merge=0/0, ticks=0/1239839, in_queue=1239839, util=98.34% 00:16:45.714 nvme6n1: ios=0/3583, merge=0/0, ticks=0/1240258, in_queue=1240258, util=98.37% 00:16:45.714 nvme7n1: ios=0/3569, merge=0/0, ticks=0/1239158, in_queue=1239158, util=98.60% 00:16:45.714 nvme8n1: ios=0/4381, merge=0/0, ticks=0/1204932, in_queue=1204932, util=98.62% 00:16:45.714 nvme9n1: ios=0/4337, merge=0/0, ticks=0/1207963, in_queue=1207963, util=99.00% 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.714 18:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:45.714 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:45.714 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.714 18:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:45.714 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:16:45.714 18:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:45.714 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.714 18:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:45.714 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:45.714 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:16:45.714 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.715 18:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:45.715 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:45.715 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.715 18:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.715 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:45.975 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:45.975 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.975 
18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:45.975 rmmod nvme_tcp 00:16:45.975 rmmod nvme_fabrics 00:16:45.975 rmmod nvme_keyring 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 85939 ']' 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 85939 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 85939 ']' 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 85939 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85939 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:45.975 killing process with pid 85939 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85939' 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 85939 00:16:45.975 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 85939 
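The entries above are multiconnection.sh winding the test down: for each cnode subsystem the host-side connection is dropped with nvme disconnect, waitforserial_disconnect polls lsblk until no block device with the matching SPDKn serial remains, and the subsystem is then removed over the SPDK RPC socket. A minimal sketch of that loop, reconstructed only from the commands visible in the xtrace (waitforserial_disconnect and rpc_cmd are helpers from common/autotest_common.sh and are not expanded here):

  for i in $(seq 1 $NVMF_SUBSYS); do
    # host side: drop the NVMe/TCP connection to this subsystem
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # wait until lsblk no longer lists a device whose serial is SPDK<i>
    waitforserial_disconnect "SPDK${i}"
    # target side: remove the subsystem through the rpc_cmd wrapper
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done

Once the loop finishes, the script removes the fio verify-state file and calls nvmftestfini, which is what produces the rmmod/iptables and interface-teardown entries that follow.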
00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:46.546 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:46.806 00:16:46.806 real 0m49.124s 00:16:46.806 user 2m49.290s 00:16:46.806 sys 0m25.036s 00:16:46.806 18:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.806 ************************************ 00:16:46.806 END TEST nvmf_multiconnection 00:16:46.806 ************************************ 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.806 ************************************ 00:16:46.806 START TEST nvmf_initiator_timeout 00:16:46.806 ************************************ 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:46.806 * Looking for test storage... 00:16:46.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:16:46.806 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:47.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.068 --rc genhtml_branch_coverage=1 00:16:47.068 --rc genhtml_function_coverage=1 00:16:47.068 --rc genhtml_legend=1 00:16:47.068 --rc geninfo_all_blocks=1 00:16:47.068 --rc geninfo_unexecuted_blocks=1 00:16:47.068 00:16:47.068 ' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:47.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.068 --rc genhtml_branch_coverage=1 00:16:47.068 --rc genhtml_function_coverage=1 00:16:47.068 --rc genhtml_legend=1 00:16:47.068 --rc geninfo_all_blocks=1 00:16:47.068 --rc geninfo_unexecuted_blocks=1 00:16:47.068 00:16:47.068 ' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:47.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.068 --rc genhtml_branch_coverage=1 00:16:47.068 --rc genhtml_function_coverage=1 00:16:47.068 --rc genhtml_legend=1 00:16:47.068 --rc geninfo_all_blocks=1 00:16:47.068 --rc geninfo_unexecuted_blocks=1 00:16:47.068 00:16:47.068 ' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:47.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.068 --rc genhtml_branch_coverage=1 00:16:47.068 --rc genhtml_function_coverage=1 00:16:47.068 --rc genhtml_legend=1 00:16:47.068 --rc geninfo_all_blocks=1 00:16:47.068 --rc geninfo_unexecuted_blocks=1 00:16:47.068 00:16:47.068 ' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.068 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.069 18:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:47.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
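The variables being set here (and continued just below) describe the virtual network the virt-mode tests run on: two initiator veth interfaces (10.0.0.1 and 10.0.0.2) in the root namespace, two target interfaces (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. The entries that follow first try to delete any stale devices (hence the "Cannot find device" lines on a clean host) and then rebuild the topology. A condensed sketch, reduced to a single initiator/target pair and using only commands that appear in the trace below:

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: *_if is the addressed end, *_br is the end that joins the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port on the initiator interface, as the ipts wrapper does
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # sanity check: the root namespace can now reach the target-side address
  ping -c 1 10.0.0.3

Running the target inside its own namespace (note NVMF_APP being prefixed with "ip netns exec nvmf_tgt_ns_spdk" further down) lets the initiator and the SPDK target share one VM while still exchanging traffic over a real TCP path.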
00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:47.069 Cannot find device "nvmf_init_br" 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:47.069 Cannot find device "nvmf_init_br2" 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:47.069 Cannot find device "nvmf_tgt_br" 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.069 Cannot find device "nvmf_tgt_br2" 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:47.069 Cannot find device "nvmf_init_br" 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:47.069 Cannot find device "nvmf_init_br2" 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:47.069 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:47.070 Cannot find device "nvmf_tgt_br" 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:47.070 Cannot find device "nvmf_tgt_br2" 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:47.070 18:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:47.070 Cannot find device "nvmf_br" 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:47.070 Cannot find device "nvmf_init_if" 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:47.070 Cannot find device "nvmf_init_if2" 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:47.070 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.330 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:47.330 18:34:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:47.330 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:47.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:47.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:16:47.331 00:16:47.331 --- 10.0.0.3 ping statistics --- 00:16:47.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.331 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:47.331 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:47.331 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:47.331 00:16:47.331 --- 10.0.0.4 ping statistics --- 00:16:47.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.331 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:47.331 00:16:47.331 --- 10.0.0.1 ping statistics --- 00:16:47.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.331 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:47.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:47.331 00:16:47.331 --- 10.0.0.2 ping statistics --- 00:16:47.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.331 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.331 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=87049 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 87049 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 87049 ']' 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.595 18:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.595 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.595 [2024-12-08 18:34:05.318578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:47.595 [2024-12-08 18:34:05.318835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.595 [2024-12-08 18:34:05.457294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.595 [2024-12-08 18:34:05.522332] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.595 [2024-12-08 18:34:05.522700] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.595 [2024-12-08 18:34:05.522840] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.595 [2024-12-08 18:34:05.522971] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.595 [2024-12-08 18:34:05.523026] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:47.595 [2024-12-08 18:34:05.523265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.595 [2024-12-08 18:34:05.523365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.853 [2024-12-08 18:34:05.523468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.853 [2024-12-08 18:34:05.523470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.853 [2024-12-08 18:34:05.577307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.853 Malloc0 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.853 Delay0 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.853 [2024-12-08 18:34:05.736317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:47.853 18:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.853 [2024-12-08 18:34:05.765392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.853 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:48.111 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.111 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:16:48.111 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.111 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:48.111 18:34:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87101 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:50.015 18:34:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:50.274 [global] 00:16:50.274 thread=1 00:16:50.274 invalidate=1 00:16:50.274 rw=write 00:16:50.274 time_based=1 00:16:50.274 runtime=60 00:16:50.274 ioengine=libaio 00:16:50.274 direct=1 00:16:50.274 bs=4096 00:16:50.274 iodepth=1 00:16:50.274 norandommap=0 00:16:50.274 numjobs=1 00:16:50.274 00:16:50.274 verify_dump=1 00:16:50.274 verify_backlog=512 00:16:50.274 verify_state_save=0 00:16:50.274 do_verify=1 00:16:50.274 verify=crc32c-intel 00:16:50.274 [job0] 00:16:50.274 filename=/dev/nvme0n1 00:16:50.274 Could not set queue depth (nvme0n1) 00:16:50.274 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.274 fio-3.35 00:16:50.274 Starting 1 thread 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.563 true 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.563 true 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.563 true 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.563 true 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.563 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 true 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 true 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 true 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:56.098 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.098 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.098 true 00:16:56.098 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.098 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:56.098 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87101 00:17:52.347 00:17:52.347 job0: (groupid=0, jobs=1): err= 0: pid=87128: Sun Dec 8 18:35:08 2024 00:17:52.347 read: IOPS=772, BW=3091KiB/s (3165kB/s)(181MiB/60000msec) 00:17:52.347 slat (usec): min=10, max=11615, avg=13.99, stdev=64.52 00:17:52.347 clat (usec): min=151, max=40553k, avg=1093.17, stdev=188329.64 00:17:52.347 lat (usec): min=163, max=40553k, avg=1107.16, stdev=188329.64 00:17:52.347 clat percentiles (usec): 00:17:52.347 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:17:52.347 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:17:52.347 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 273], 00:17:52.347 | 99.00th=[ 310], 99.50th=[ 343], 99.90th=[ 562], 99.95th=[ 644], 00:17:52.347 | 99.99th=[ 898] 00:17:52.347 write: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60000msec); 0 zone resets 00:17:52.347 slat (usec): min=12, max=579, avg=19.06, stdev= 5.97 00:17:52.347 clat (usec): min=113, max=7786, avg=164.01, stdev=48.01 00:17:52.347 lat (usec): min=129, max=7821, avg=183.07, stdev=48.74 00:17:52.347 clat percentiles (usec): 00:17:52.347 | 1.00th=[ 121], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 143], 00:17:52.347 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:17:52.347 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 208], 00:17:52.347 | 99.00th=[ 243], 
99.50th=[ 265], 99.90th=[ 478], 99.95th=[ 619], 00:17:52.347 | 99.99th=[ 1205] 00:17:52.347 bw ( KiB/s): min= 3440, max=12288, per=100.00%, avg=9347.28, stdev=1618.88, samples=39 00:17:52.347 iops : min= 860, max= 3072, avg=2336.82, stdev=404.72, samples=39 00:17:52.347 lat (usec) : 250=92.68%, 500=7.19%, 750=0.10%, 1000=0.01% 00:17:52.347 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:17:52.347 cpu : usr=0.49%, sys=2.02%, ctx=92971, majf=0, minf=5 00:17:52.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.347 issued rwts: total=46367,46592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.347 00:17:52.347 Run status group 0 (all jobs): 00:17:52.347 READ: bw=3091KiB/s (3165kB/s), 3091KiB/s-3091KiB/s (3165kB/s-3165kB/s), io=181MiB (190MB), run=60000-60000msec 00:17:52.347 WRITE: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60000-60000msec 00:17:52.347 00:17:52.347 Disk stats (read/write): 00:17:52.347 nvme0n1: ios=46335/46417, merge=0/0, ticks=10388/8012, in_queue=18400, util=99.64% 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:52.347 nvmf hotplug test: fio successful as expected 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:52.347 18:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.347 rmmod nvme_tcp 00:17:52.347 rmmod nvme_fabrics 00:17:52.347 rmmod nvme_keyring 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 87049 ']' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 87049 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 87049 ']' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 87049 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87049 00:17:52.347 killing process with pid 87049 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87049' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 87049 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 87049 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:52.347 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:17:52.348 18:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:52.348 00:17:52.348 real 1m4.295s 00:17:52.348 user 3m57.302s 00:17:52.348 sys 0m15.600s 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.348 ************************************ 00:17:52.348 END TEST nvmf_initiator_timeout 00:17:52.348 ************************************ 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:17:52.348 00:17:52.348 real 6m51.395s 00:17:52.348 user 17m2.112s 00:17:52.348 sys 1m49.140s 00:17:52.348 ************************************ 00:17:52.348 END TEST nvmf_target_extra 00:17:52.348 ************************************ 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.348 18:35:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.348 18:35:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:52.348 18:35:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.348 18:35:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.348 18:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.348 ************************************ 00:17:52.348 START TEST nvmf_host 00:17:52.348 ************************************ 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:52.348 * Looking for test storage... 00:17:52.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.348 --rc genhtml_branch_coverage=1 00:17:52.348 --rc genhtml_function_coverage=1 00:17:52.348 --rc genhtml_legend=1 00:17:52.348 --rc geninfo_all_blocks=1 00:17:52.348 --rc geninfo_unexecuted_blocks=1 00:17:52.348 00:17:52.348 ' 00:17:52.348 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.348 --rc genhtml_branch_coverage=1 00:17:52.348 --rc genhtml_function_coverage=1 00:17:52.348 --rc genhtml_legend=1 00:17:52.348 --rc geninfo_all_blocks=1 00:17:52.348 --rc geninfo_unexecuted_blocks=1 00:17:52.349 00:17:52.349 ' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.349 --rc genhtml_branch_coverage=1 00:17:52.349 --rc genhtml_function_coverage=1 00:17:52.349 --rc genhtml_legend=1 00:17:52.349 --rc geninfo_all_blocks=1 00:17:52.349 --rc geninfo_unexecuted_blocks=1 00:17:52.349 00:17:52.349 ' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.349 --rc genhtml_branch_coverage=1 00:17:52.349 --rc genhtml_function_coverage=1 00:17:52.349 --rc genhtml_legend=1 00:17:52.349 --rc geninfo_all_blocks=1 00:17:52.349 --rc geninfo_unexecuted_blocks=1 00:17:52.349 00:17:52.349 ' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.349 18:35:09 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.349 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.349 ************************************ 00:17:52.349 START TEST nvmf_identify 00:17:52.349 ************************************ 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:52.349 * Looking for test storage... 
00:17:52.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.349 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.350 --rc genhtml_branch_coverage=1 00:17:52.350 --rc genhtml_function_coverage=1 00:17:52.350 --rc genhtml_legend=1 00:17:52.350 --rc geninfo_all_blocks=1 00:17:52.350 --rc geninfo_unexecuted_blocks=1 00:17:52.350 00:17:52.350 ' 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.350 --rc genhtml_branch_coverage=1 00:17:52.350 --rc genhtml_function_coverage=1 00:17:52.350 --rc genhtml_legend=1 00:17:52.350 --rc geninfo_all_blocks=1 00:17:52.350 --rc geninfo_unexecuted_blocks=1 00:17:52.350 00:17:52.350 ' 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.350 --rc genhtml_branch_coverage=1 00:17:52.350 --rc genhtml_function_coverage=1 00:17:52.350 --rc genhtml_legend=1 00:17:52.350 --rc geninfo_all_blocks=1 00:17:52.350 --rc geninfo_unexecuted_blocks=1 00:17:52.350 00:17:52.350 ' 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.350 --rc genhtml_branch_coverage=1 00:17:52.350 --rc genhtml_function_coverage=1 00:17:52.350 --rc genhtml_legend=1 00:17:52.350 --rc geninfo_all_blocks=1 00:17:52.350 --rc geninfo_unexecuted_blocks=1 00:17:52.350 00:17:52.350 ' 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.350 
18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:52.350 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.351 18:35:09 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.351 Cannot find device "nvmf_init_br" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.351 Cannot find device "nvmf_init_br2" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.351 Cannot find device "nvmf_tgt_br" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:52.351 Cannot find device "nvmf_tgt_br2" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.351 Cannot find device "nvmf_init_br" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.351 Cannot find device "nvmf_init_br2" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.351 Cannot find device "nvmf_tgt_br" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:52.351 Cannot find device "nvmf_tgt_br2" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:52.351 Cannot find device "nvmf_br" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:52.351 Cannot find device "nvmf_init_if" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:52.351 Cannot find device "nvmf_init_if2" 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:52.351 
18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:52.351 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:52.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:52.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:17:52.352 00:17:52.352 --- 10.0.0.3 ping statistics --- 00:17:52.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.352 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:52.352 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:52.352 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:17:52.352 00:17:52.352 --- 10.0.0.4 ping statistics --- 00:17:52.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.352 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:52.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:17:52.352 00:17:52.352 --- 10.0.0.1 ping statistics --- 00:17:52.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.352 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:52.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:17:52.352 00:17:52.352 --- 10.0.0.2 ping statistics --- 00:17:52.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.352 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88051 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88051 00:17:52.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
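The nvmf_veth_init trace above builds a bridged veth topology between the host initiator side (10.0.0.1/10.0.0.2) and the nvmf_tgt_ns_spdk namespace on the target side (10.0.0.3/10.0.0.4), then verifies it with the four pings. A condensed sketch of that setup outside the test harness could look like the lines below; interface names, addresses, and the iptables rule are taken from the trace, only one of the two initiator/target pairs is shown, and the exact ordering and extra flags used by common.sh may differ.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, both ends start on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # host -> target namespace, as in the trace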
00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 88051 ']' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.352 18:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.352 [2024-12-08 18:35:09.950795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:52.352 [2024-12-08 18:35:09.951058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.352 [2024-12-08 18:35:10.086397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.352 [2024-12-08 18:35:10.155329] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.352 [2024-12-08 18:35:10.155603] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.352 [2024-12-08 18:35:10.155686] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.352 [2024-12-08 18:35:10.155789] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.352 [2024-12-08 18:35:10.155880] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
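The identify test then launches the target inside that namespace (host/identify.sh@18: nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waits for its RPC socket with waitforlisten. A minimal stand-alone sketch of that step, assuming SPDK is built at the path shown in the trace and the shell runs as root, might be the following; the real waitforlisten helper also polls the RPC server rather than only checking that the socket file exists.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# crude readiness wait; the harness additionally issues an RPC to confirm the target answers
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"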
00:17:52.352 [2024-12-08 18:35:10.157451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.352 [2024-12-08 18:35:10.157657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.352 [2024-12-08 18:35:10.157798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.352 [2024-12-08 18:35:10.157807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.352 [2024-12-08 18:35:10.210087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 [2024-12-08 18:35:10.289114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 Malloc0 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 [2024-12-08 18:35:10.389073] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.612 [ 00:17:52.612 { 00:17:52.612 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:52.612 "subtype": "Discovery", 00:17:52.612 "listen_addresses": [ 00:17:52.612 { 00:17:52.612 "trtype": "TCP", 00:17:52.612 "adrfam": "IPv4", 00:17:52.612 "traddr": "10.0.0.3", 00:17:52.612 "trsvcid": "4420" 00:17:52.612 } 00:17:52.612 ], 00:17:52.612 "allow_any_host": true, 00:17:52.612 "hosts": [] 00:17:52.612 }, 00:17:52.612 { 00:17:52.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.612 "subtype": "NVMe", 00:17:52.612 "listen_addresses": [ 00:17:52.612 { 00:17:52.612 "trtype": "TCP", 00:17:52.612 "adrfam": "IPv4", 00:17:52.612 "traddr": "10.0.0.3", 00:17:52.612 "trsvcid": "4420" 00:17:52.612 } 00:17:52.612 ], 00:17:52.612 "allow_any_host": true, 00:17:52.612 "hosts": [], 00:17:52.612 "serial_number": "SPDK00000000000001", 00:17:52.612 "model_number": "SPDK bdev Controller", 00:17:52.612 "max_namespaces": 32, 00:17:52.612 "min_cntlid": 1, 00:17:52.612 "max_cntlid": 65519, 00:17:52.612 "namespaces": [ 00:17:52.612 { 00:17:52.612 "nsid": 1, 00:17:52.612 "bdev_name": "Malloc0", 00:17:52.612 "name": "Malloc0", 00:17:52.612 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:52.612 "eui64": "ABCDEF0123456789", 00:17:52.612 "uuid": "f8c12de1-d31a-43e8-b665-08c5d77fbb91" 00:17:52.612 } 00:17:52.612 ] 00:17:52.612 } 00:17:52.612 ] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.612 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:52.612 [2024-12-08 18:35:10.444102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
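The identify pass that starts here connects to the discovery subsystem over TCP and dumps its controller data and discovery log (printed further below, after the nvme_tcp/nvme_ctrlr debug traces). Run outside the test harness, the equivalent invocation, assuming the build-tree layout used by this job, is essentially the command already shown above:

  # Sketch: the same discovery-controller identify the test runs.
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all    # -L all enables all debug log flags, which is what produces the *DEBUG* traces below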
00:17:52.612 [2024-12-08 18:35:10.444166] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88084 ] 00:17:52.877 [2024-12-08 18:35:10.578918] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:52.877 [2024-12-08 18:35:10.578996] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:52.877 [2024-12-08 18:35:10.579003] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:52.877 [2024-12-08 18:35:10.579014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:52.877 [2024-12-08 18:35:10.579022] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:52.877 [2024-12-08 18:35:10.579360] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:52.877 [2024-12-08 18:35:10.579455] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1645ac0 0 00:17:52.877 [2024-12-08 18:35:10.584525] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:52.877 [2024-12-08 18:35:10.584552] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:52.877 [2024-12-08 18:35:10.584575] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:52.877 [2024-12-08 18:35:10.584579] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:52.877 [2024-12-08 18:35:10.584613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.584621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.584625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.877 [2024-12-08 18:35:10.584638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:52.877 [2024-12-08 18:35:10.584669] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.877 [2024-12-08 18:35:10.600423] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.877 [2024-12-08 18:35:10.600446] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.877 [2024-12-08 18:35:10.600467] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.877 [2024-12-08 18:35:10.600485] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:52.877 [2024-12-08 18:35:10.600493] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:52.877 [2024-12-08 18:35:10.600499] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:52.877 [2024-12-08 18:35:10.600514] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600520] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.877 
[2024-12-08 18:35:10.600524] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.877 [2024-12-08 18:35:10.600532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.877 [2024-12-08 18:35:10.600559] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.877 [2024-12-08 18:35:10.600626] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.877 [2024-12-08 18:35:10.600633] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.877 [2024-12-08 18:35:10.600637] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600641] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.877 [2024-12-08 18:35:10.600662] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:52.877 [2024-12-08 18:35:10.600670] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:52.877 [2024-12-08 18:35:10.600694] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.877 [2024-12-08 18:35:10.600710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.877 [2024-12-08 18:35:10.600728] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.877 [2024-12-08 18:35:10.600780] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.877 [2024-12-08 18:35:10.600786] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.877 [2024-12-08 18:35:10.600790] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600794] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.877 [2024-12-08 18:35:10.600800] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:52.877 [2024-12-08 18:35:10.600809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:52.877 [2024-12-08 18:35:10.600816] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600820] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600824] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.877 [2024-12-08 18:35:10.600831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.877 [2024-12-08 18:35:10.600848] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.877 [2024-12-08 18:35:10.600892] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.877 [2024-12-08 18:35:10.600899] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.877 [2024-12-08 18:35:10.600903] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600907] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.877 [2024-12-08 18:35:10.600912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:52.877 [2024-12-08 18:35:10.600922] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.877 [2024-12-08 18:35:10.600931] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.600938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.878 [2024-12-08 18:35:10.600954] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.878 [2024-12-08 18:35:10.600999] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.878 [2024-12-08 18:35:10.601006] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.878 [2024-12-08 18:35:10.601009] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601013] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.878 [2024-12-08 18:35:10.601018] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:52.878 [2024-12-08 18:35:10.601023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:52.878 [2024-12-08 18:35:10.601031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:52.878 [2024-12-08 18:35:10.601136] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:52.878 [2024-12-08 18:35:10.601142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:52.878 [2024-12-08 18:35:10.601152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601156] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601160] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.878 [2024-12-08 18:35:10.601184] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.878 [2024-12-08 18:35:10.601234] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.878 [2024-12-08 18:35:10.601241] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.878 [2024-12-08 18:35:10.601244] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.878 
[2024-12-08 18:35:10.601249] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.878 [2024-12-08 18:35:10.601254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:52.878 [2024-12-08 18:35:10.601264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601269] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601273] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.878 [2024-12-08 18:35:10.601296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.878 [2024-12-08 18:35:10.601341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.878 [2024-12-08 18:35:10.601348] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.878 [2024-12-08 18:35:10.601352] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601356] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.878 [2024-12-08 18:35:10.601361] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:52.878 [2024-12-08 18:35:10.601366] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:52.878 [2024-12-08 18:35:10.601375] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:52.878 [2024-12-08 18:35:10.601391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:52.878 [2024-12-08 18:35:10.601401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.878 [2024-12-08 18:35:10.601433] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.878 [2024-12-08 18:35:10.601549] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:52.878 [2024-12-08 18:35:10.601558] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:52.878 [2024-12-08 18:35:10.601562] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601567] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1645ac0): datao=0, datal=4096, cccid=0 00:17:52.878 [2024-12-08 18:35:10.601572] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x167e7c0) on tqpair(0x1645ac0): expected_datao=0, payload_size=4096 00:17:52.878 [2024-12-08 18:35:10.601577] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 
[2024-12-08 18:35:10.601586] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601590] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601599] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.878 [2024-12-08 18:35:10.601605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.878 [2024-12-08 18:35:10.601609] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601613] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.878 [2024-12-08 18:35:10.601622] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:52.878 [2024-12-08 18:35:10.601628] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:52.878 [2024-12-08 18:35:10.601633] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:52.878 [2024-12-08 18:35:10.601638] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:52.878 [2024-12-08 18:35:10.601643] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:52.878 [2024-12-08 18:35:10.601649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:52.878 [2024-12-08 18:35:10.601658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:52.878 [2024-12-08 18:35:10.601670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601676] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601680] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:52.878 [2024-12-08 18:35:10.601709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.878 [2024-12-08 18:35:10.601767] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.878 [2024-12-08 18:35:10.601774] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.878 [2024-12-08 18:35:10.601777] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601782] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.878 [2024-12-08 18:35:10.601790] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601798] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.878 [2024-12-08 18:35:10.601812] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601816] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601820] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.878 [2024-12-08 18:35:10.601833] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601837] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601841] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.878 [2024-12-08 18:35:10.601853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601861] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.878 [2024-12-08 18:35:10.601873] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:52.878 [2024-12-08 18:35:10.601886] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:52.878 [2024-12-08 18:35:10.601894] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.601899] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1645ac0) 00:17:52.878 [2024-12-08 18:35:10.601918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.878 [2024-12-08 18:35:10.601938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e7c0, cid 0, qid 0 00:17:52.878 [2024-12-08 18:35:10.601945] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167e940, cid 1, qid 0 00:17:52.878 [2024-12-08 18:35:10.601950] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167eac0, cid 2, qid 0 00:17:52.878 [2024-12-08 18:35:10.601955] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.878 [2024-12-08 18:35:10.601960] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167edc0, cid 4, qid 0 00:17:52.878 [2024-12-08 18:35:10.602052] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.878 [2024-12-08 18:35:10.602059] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.878 [2024-12-08 18:35:10.602063] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.878 [2024-12-08 18:35:10.602067] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167edc0) on tqpair=0x1645ac0 00:17:52.878 [2024-12-08 18:35:10.602073] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:52.879 [2024-12-08 18:35:10.602078] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:52.879 [2024-12-08 18:35:10.602090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602094] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1645ac0) 00:17:52.879 [2024-12-08 18:35:10.602102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.879 [2024-12-08 18:35:10.602119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167edc0, cid 4, qid 0 00:17:52.879 [2024-12-08 18:35:10.602181] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:52.879 [2024-12-08 18:35:10.602187] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:52.879 [2024-12-08 18:35:10.602191] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602195] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1645ac0): datao=0, datal=4096, cccid=4 00:17:52.879 [2024-12-08 18:35:10.602200] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x167edc0) on tqpair(0x1645ac0): expected_datao=0, payload_size=4096 00:17:52.879 [2024-12-08 18:35:10.602205] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602212] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602216] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.879 [2024-12-08 18:35:10.602231] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.879 [2024-12-08 18:35:10.602235] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602239] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167edc0) on tqpair=0x1645ac0 00:17:52.879 [2024-12-08 18:35:10.602252] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:52.879 [2024-12-08 18:35:10.602280] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1645ac0) 00:17:52.879 [2024-12-08 18:35:10.602294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.879 [2024-12-08 18:35:10.602302] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1645ac0) 00:17:52.879 [2024-12-08 18:35:10.602316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.879 [2024-12-08 18:35:10.602340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x167edc0, cid 4, qid 0 00:17:52.879 [2024-12-08 18:35:10.602347] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ef40, cid 5, qid 0 00:17:52.879 [2024-12-08 18:35:10.602457] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:52.879 [2024-12-08 18:35:10.602467] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:52.879 [2024-12-08 18:35:10.602471] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602475] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1645ac0): datao=0, datal=1024, cccid=4 00:17:52.879 [2024-12-08 18:35:10.602480] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x167edc0) on tqpair(0x1645ac0): expected_datao=0, payload_size=1024 00:17:52.879 [2024-12-08 18:35:10.602485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602492] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602496] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.879 [2024-12-08 18:35:10.602508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.879 [2024-12-08 18:35:10.602512] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602516] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ef40) on tqpair=0x1645ac0 00:17:52.879 [2024-12-08 18:35:10.602535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.879 [2024-12-08 18:35:10.602543] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.879 [2024-12-08 18:35:10.602546] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602550] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167edc0) on tqpair=0x1645ac0 00:17:52.879 [2024-12-08 18:35:10.602562] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1645ac0) 00:17:52.879 [2024-12-08 18:35:10.602574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.879 [2024-12-08 18:35:10.602598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167edc0, cid 4, qid 0 00:17:52.879 [2024-12-08 18:35:10.602668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:52.879 [2024-12-08 18:35:10.602675] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:52.879 [2024-12-08 18:35:10.602679] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602683] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1645ac0): datao=0, datal=3072, cccid=4 00:17:52.879 [2024-12-08 18:35:10.602688] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x167edc0) on tqpair(0x1645ac0): expected_datao=0, payload_size=3072 00:17:52.879 [2024-12-08 18:35:10.602693] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602700] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602704] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602712] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.879 [2024-12-08 18:35:10.602718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.879 [2024-12-08 18:35:10.602722] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602726] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167edc0) on tqpair=0x1645ac0 00:17:52.879 [2024-12-08 18:35:10.602736] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602741] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1645ac0) 00:17:52.879 [2024-12-08 18:35:10.602748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.879 [2024-12-08 18:35:10.602770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167edc0, cid 4, qid 0 00:17:52.879 [2024-12-08 18:35:10.602836] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:52.879 [2024-12-08 18:35:10.602843] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:52.879 [2024-12-08 18:35:10.602847] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602851] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1645ac0): datao=0, datal=8, cccid=4 00:17:52.879 [2024-12-08 18:35:10.602856] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x167edc0) on tqpair(0x1645ac0): expected_datao=0, payload_size=8 00:17:52.879 [2024-12-08 18:35:10.602860] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602868] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602871] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602886] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.879 [2024-12-08 18:35:10.602893] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.879 [2024-12-08 18:35:10.602896] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.879 [2024-12-08 18:35:10.602901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167edc0) on tqpair=0x1645ac0 00:17:52.879 ===================================================== 00:17:52.879 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:52.879 ===================================================== 00:17:52.879 Controller Capabilities/Features 00:17:52.879 ================================ 00:17:52.879 Vendor ID: 0000 00:17:52.879 Subsystem Vendor ID: 0000 00:17:52.879 Serial Number: .................... 00:17:52.879 Model Number: ........................................ 
00:17:52.879 Firmware Version: 24.09.1 00:17:52.879 Recommended Arb Burst: 0 00:17:52.879 IEEE OUI Identifier: 00 00 00 00:17:52.879 Multi-path I/O 00:17:52.879 May have multiple subsystem ports: No 00:17:52.879 May have multiple controllers: No 00:17:52.879 Associated with SR-IOV VF: No 00:17:52.879 Max Data Transfer Size: 131072 00:17:52.879 Max Number of Namespaces: 0 00:17:52.879 Max Number of I/O Queues: 1024 00:17:52.879 NVMe Specification Version (VS): 1.3 00:17:52.879 NVMe Specification Version (Identify): 1.3 00:17:52.879 Maximum Queue Entries: 128 00:17:52.879 Contiguous Queues Required: Yes 00:17:52.879 Arbitration Mechanisms Supported 00:17:52.879 Weighted Round Robin: Not Supported 00:17:52.879 Vendor Specific: Not Supported 00:17:52.879 Reset Timeout: 15000 ms 00:17:52.879 Doorbell Stride: 4 bytes 00:17:52.879 NVM Subsystem Reset: Not Supported 00:17:52.879 Command Sets Supported 00:17:52.879 NVM Command Set: Supported 00:17:52.879 Boot Partition: Not Supported 00:17:52.879 Memory Page Size Minimum: 4096 bytes 00:17:52.879 Memory Page Size Maximum: 4096 bytes 00:17:52.879 Persistent Memory Region: Not Supported 00:17:52.879 Optional Asynchronous Events Supported 00:17:52.879 Namespace Attribute Notices: Not Supported 00:17:52.879 Firmware Activation Notices: Not Supported 00:17:52.879 ANA Change Notices: Not Supported 00:17:52.879 PLE Aggregate Log Change Notices: Not Supported 00:17:52.879 LBA Status Info Alert Notices: Not Supported 00:17:52.879 EGE Aggregate Log Change Notices: Not Supported 00:17:52.879 Normal NVM Subsystem Shutdown event: Not Supported 00:17:52.879 Zone Descriptor Change Notices: Not Supported 00:17:52.879 Discovery Log Change Notices: Supported 00:17:52.879 Controller Attributes 00:17:52.879 128-bit Host Identifier: Not Supported 00:17:52.879 Non-Operational Permissive Mode: Not Supported 00:17:52.879 NVM Sets: Not Supported 00:17:52.880 Read Recovery Levels: Not Supported 00:17:52.880 Endurance Groups: Not Supported 00:17:52.880 Predictable Latency Mode: Not Supported 00:17:52.880 Traffic Based Keep ALive: Not Supported 00:17:52.880 Namespace Granularity: Not Supported 00:17:52.880 SQ Associations: Not Supported 00:17:52.880 UUID List: Not Supported 00:17:52.880 Multi-Domain Subsystem: Not Supported 00:17:52.880 Fixed Capacity Management: Not Supported 00:17:52.880 Variable Capacity Management: Not Supported 00:17:52.880 Delete Endurance Group: Not Supported 00:17:52.880 Delete NVM Set: Not Supported 00:17:52.880 Extended LBA Formats Supported: Not Supported 00:17:52.880 Flexible Data Placement Supported: Not Supported 00:17:52.880 00:17:52.880 Controller Memory Buffer Support 00:17:52.880 ================================ 00:17:52.880 Supported: No 00:17:52.880 00:17:52.880 Persistent Memory Region Support 00:17:52.880 ================================ 00:17:52.880 Supported: No 00:17:52.880 00:17:52.880 Admin Command Set Attributes 00:17:52.880 ============================ 00:17:52.880 Security Send/Receive: Not Supported 00:17:52.880 Format NVM: Not Supported 00:17:52.880 Firmware Activate/Download: Not Supported 00:17:52.880 Namespace Management: Not Supported 00:17:52.880 Device Self-Test: Not Supported 00:17:52.880 Directives: Not Supported 00:17:52.880 NVMe-MI: Not Supported 00:17:52.880 Virtualization Management: Not Supported 00:17:52.880 Doorbell Buffer Config: Not Supported 00:17:52.880 Get LBA Status Capability: Not Supported 00:17:52.880 Command & Feature Lockdown Capability: Not Supported 00:17:52.880 Abort Command Limit: 1 00:17:52.880 
Async Event Request Limit: 4 00:17:52.880 Number of Firmware Slots: N/A 00:17:52.880 Firmware Slot 1 Read-Only: N/A 00:17:52.880 Firmware Activation Without Reset: N/A 00:17:52.880 Multiple Update Detection Support: N/A 00:17:52.880 Firmware Update Granularity: No Information Provided 00:17:52.880 Per-Namespace SMART Log: No 00:17:52.880 Asymmetric Namespace Access Log Page: Not Supported 00:17:52.880 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:52.880 Command Effects Log Page: Not Supported 00:17:52.880 Get Log Page Extended Data: Supported 00:17:52.880 Telemetry Log Pages: Not Supported 00:17:52.880 Persistent Event Log Pages: Not Supported 00:17:52.880 Supported Log Pages Log Page: May Support 00:17:52.880 Commands Supported & Effects Log Page: Not Supported 00:17:52.880 Feature Identifiers & Effects Log Page:May Support 00:17:52.880 NVMe-MI Commands & Effects Log Page: May Support 00:17:52.880 Data Area 4 for Telemetry Log: Not Supported 00:17:52.880 Error Log Page Entries Supported: 128 00:17:52.880 Keep Alive: Not Supported 00:17:52.880 00:17:52.880 NVM Command Set Attributes 00:17:52.880 ========================== 00:17:52.880 Submission Queue Entry Size 00:17:52.880 Max: 1 00:17:52.880 Min: 1 00:17:52.880 Completion Queue Entry Size 00:17:52.880 Max: 1 00:17:52.880 Min: 1 00:17:52.880 Number of Namespaces: 0 00:17:52.880 Compare Command: Not Supported 00:17:52.880 Write Uncorrectable Command: Not Supported 00:17:52.880 Dataset Management Command: Not Supported 00:17:52.880 Write Zeroes Command: Not Supported 00:17:52.880 Set Features Save Field: Not Supported 00:17:52.880 Reservations: Not Supported 00:17:52.880 Timestamp: Not Supported 00:17:52.880 Copy: Not Supported 00:17:52.880 Volatile Write Cache: Not Present 00:17:52.880 Atomic Write Unit (Normal): 1 00:17:52.880 Atomic Write Unit (PFail): 1 00:17:52.880 Atomic Compare & Write Unit: 1 00:17:52.880 Fused Compare & Write: Supported 00:17:52.880 Scatter-Gather List 00:17:52.880 SGL Command Set: Supported 00:17:52.880 SGL Keyed: Supported 00:17:52.880 SGL Bit Bucket Descriptor: Not Supported 00:17:52.880 SGL Metadata Pointer: Not Supported 00:17:52.880 Oversized SGL: Not Supported 00:17:52.880 SGL Metadata Address: Not Supported 00:17:52.880 SGL Offset: Supported 00:17:52.880 Transport SGL Data Block: Not Supported 00:17:52.880 Replay Protected Memory Block: Not Supported 00:17:52.880 00:17:52.880 Firmware Slot Information 00:17:52.880 ========================= 00:17:52.880 Active slot: 0 00:17:52.880 00:17:52.880 00:17:52.880 Error Log 00:17:52.880 ========= 00:17:52.880 00:17:52.880 Active Namespaces 00:17:52.880 ================= 00:17:52.880 Discovery Log Page 00:17:52.880 ================== 00:17:52.880 Generation Counter: 2 00:17:52.880 Number of Records: 2 00:17:52.880 Record Format: 0 00:17:52.880 00:17:52.880 Discovery Log Entry 0 00:17:52.880 ---------------------- 00:17:52.880 Transport Type: 3 (TCP) 00:17:52.880 Address Family: 1 (IPv4) 00:17:52.880 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:52.880 Entry Flags: 00:17:52.880 Duplicate Returned Information: 1 00:17:52.880 Explicit Persistent Connection Support for Discovery: 1 00:17:52.880 Transport Requirements: 00:17:52.880 Secure Channel: Not Required 00:17:52.880 Port ID: 0 (0x0000) 00:17:52.880 Controller ID: 65535 (0xffff) 00:17:52.880 Admin Max SQ Size: 128 00:17:52.880 Transport Service Identifier: 4420 00:17:52.880 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:52.880 Transport Address: 10.0.0.3 00:17:52.880 
Discovery Log Entry 1 00:17:52.880 ---------------------- 00:17:52.880 Transport Type: 3 (TCP) 00:17:52.880 Address Family: 1 (IPv4) 00:17:52.880 Subsystem Type: 2 (NVM Subsystem) 00:17:52.880 Entry Flags: 00:17:52.880 Duplicate Returned Information: 0 00:17:52.880 Explicit Persistent Connection Support for Discovery: 0 00:17:52.880 Transport Requirements: 00:17:52.880 Secure Channel: Not Required 00:17:52.880 Port ID: 0 (0x0000) 00:17:52.880 Controller ID: 65535 (0xffff) 00:17:52.880 Admin Max SQ Size: 128 00:17:52.880 Transport Service Identifier: 4420 00:17:52.880 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:52.880 Transport Address: 10.0.0.3 [2024-12-08 18:35:10.602994] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:52.880 [2024-12-08 18:35:10.603009] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e7c0) on tqpair=0x1645ac0 00:17:52.880 [2024-12-08 18:35:10.603016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.880 [2024-12-08 18:35:10.603022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167e940) on tqpair=0x1645ac0 00:17:52.880 [2024-12-08 18:35:10.603027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.880 [2024-12-08 18:35:10.603033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167eac0) on tqpair=0x1645ac0 00:17:52.880 [2024-12-08 18:35:10.603038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.880 [2024-12-08 18:35:10.603043] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.880 [2024-12-08 18:35:10.603048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.880 [2024-12-08 18:35:10.603057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.880 [2024-12-08 18:35:10.603062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.880 [2024-12-08 18:35:10.603066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.880 [2024-12-08 18:35:10.603074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.880 [2024-12-08 18:35:10.603096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.880 [2024-12-08 18:35:10.603144] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.880 [2024-12-08 18:35:10.603152] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.880 [2024-12-08 18:35:10.603155] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.880 [2024-12-08 18:35:10.603160] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.880 [2024-12-08 18:35:10.603168] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.880 [2024-12-08 18:35:10.603172] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.880 [2024-12-08 18:35:10.603176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.880 [2024-12-08 
18:35:10.603184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.880 [2024-12-08 18:35:10.603205] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.880 [2024-12-08 18:35:10.603266] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.880 [2024-12-08 18:35:10.603273] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.880 [2024-12-08 18:35:10.603276] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.880 [2024-12-08 18:35:10.603281] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.880 [2024-12-08 18:35:10.603286] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:52.880 [2024-12-08 18:35:10.603291] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:52.881 [2024-12-08 18:35:10.603301] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603305] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603309] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.603317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.603333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.603380] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.603386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.603390] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.603420] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603427] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.603439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.603458] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.603506] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.603513] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.603516] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603520] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.603531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603539] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.603547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.603563] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.603613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.603620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.603624] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603628] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.603639] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.603655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.603671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.603714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.603721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.603725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.603739] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603744] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.603755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.603771] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.603870] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.603878] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.603881] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.603896] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603901] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603905] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.603912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.603930] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.603977] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.603984] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.603988] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.603992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.604002] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604007] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604011] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.604018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.604034] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.604081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.604088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.604092] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.604106] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.604137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.604153] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.604201] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.604208] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.604211] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.604225] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604233] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.604241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.604256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 
[2024-12-08 18:35:10.604319] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.604326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.604330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.604344] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604349] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604353] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.604360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.604376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.604428] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.604435] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.604439] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.604467] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604474] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604478] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.604486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.881 [2024-12-08 18:35:10.604505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.881 [2024-12-08 18:35:10.604552] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.881 [2024-12-08 18:35:10.604559] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.881 [2024-12-08 18:35:10.604562] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604567] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.881 [2024-12-08 18:35:10.604577] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604582] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.881 [2024-12-08 18:35:10.604586] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.881 [2024-12-08 18:35:10.604593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.604610] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.604657] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.604663] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:17:52.882 [2024-12-08 18:35:10.604667] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604671] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.604682] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604687] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604690] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.604698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.604714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.604760] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.604767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.604770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.604785] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604790] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604794] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.604801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.604818] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.604865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.604872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.604876] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.604890] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604895] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604899] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.604906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.604922] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.604968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.604975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.604979] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604983] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.604993] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.604998] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605002] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.605009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.605025] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.605074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.605080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.605084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.605099] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605103] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605107] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.605115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.605145] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.605190] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.605196] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.605200] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.605214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605218] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605222] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.605229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.605245] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.605287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.605294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.605297] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605302] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.605311] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605316] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605320] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.605327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.605343] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.605386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.605392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.605396] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.605410] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605430] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.882 [2024-12-08 18:35:10.605437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.882 [2024-12-08 18:35:10.605455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.882 [2024-12-08 18:35:10.605510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.882 [2024-12-08 18:35:10.605516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.882 [2024-12-08 18:35:10.605520] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605524] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.882 [2024-12-08 18:35:10.605534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.882 [2024-12-08 18:35:10.605539] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.605549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.605566] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.605615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.605622] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.605625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605629] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.605639] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605644] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 
[2024-12-08 18:35:10.605655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.605670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.605713] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.605719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.605723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.605737] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605742] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605745] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.605753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.605784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.605827] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.605834] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.605838] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605842] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.605852] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605861] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.605869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.605885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.605931] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.605938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.605942] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605946] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.605956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.605965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.605972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.605988] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606032] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.606038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606043] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.606058] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.606073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.606089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.606169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.606187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606195] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.606202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.606217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606268] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.606274] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606278] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606282] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.606292] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.606307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.606323] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606368] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 
[2024-12-08 18:35:10.606374] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606382] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.606392] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606396] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606400] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.606407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.606423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606481] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.606490] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606493] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606498] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.606508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606516] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.606524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.606542] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.606593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606597] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606601] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.606611] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.606626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.606642] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606693] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.606700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606703] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:52.883 [2024-12-08 18:35:10.606707] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.883 [2024-12-08 18:35:10.606717] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606722] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606726] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.883 [2024-12-08 18:35:10.606733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.883 [2024-12-08 18:35:10.606748] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.883 [2024-12-08 18:35:10.606794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.883 [2024-12-08 18:35:10.606801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.883 [2024-12-08 18:35:10.606804] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.883 [2024-12-08 18:35:10.606808] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.606818] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.606823] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.606826] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.606834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.606849] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.606902] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.606909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.606912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.606916] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.606927] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.606931] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.606935] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.606942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.606958] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607000] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607006] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607010] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607014] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607024] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607028] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607032] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607054] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607099] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607110] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607115] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607119] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607129] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607134] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607138] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607162] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607206] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607219] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607223] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607238] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607330] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607344] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607348] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607371] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607435] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607442] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607453] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607458] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607461] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607532] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607538] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607542] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607546] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607556] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607587] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607638] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607642] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607646] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607747] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607766] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607769] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607792] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607872] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607880] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607883] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.607898] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.607907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.607914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.607932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 18:35:10.607985] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.607992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.884 [2024-12-08 18:35:10.607996] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.608000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.884 [2024-12-08 18:35:10.608011] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.608015] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.884 [2024-12-08 18:35:10.608019] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.884 [2024-12-08 18:35:10.608027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.884 [2024-12-08 18:35:10.608043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.884 [2024-12-08 
18:35:10.608095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.884 [2024-12-08 18:35:10.608102] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.885 [2024-12-08 18:35:10.608106] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608110] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.885 [2024-12-08 18:35:10.608135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608139] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608143] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.885 [2024-12-08 18:35:10.608150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.885 [2024-12-08 18:35:10.608166] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.885 [2024-12-08 18:35:10.608214] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.885 [2024-12-08 18:35:10.608220] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.885 [2024-12-08 18:35:10.608224] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608228] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.885 [2024-12-08 18:35:10.608237] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.885 [2024-12-08 18:35:10.608253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.885 [2024-12-08 18:35:10.608268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.885 [2024-12-08 18:35:10.608316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.885 [2024-12-08 18:35:10.608322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.885 [2024-12-08 18:35:10.608326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608330] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.885 [2024-12-08 18:35:10.608340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608344] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.608348] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.885 [2024-12-08 18:35:10.608355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.885 [2024-12-08 18:35:10.608371] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.885 [2024-12-08 18:35:10.608416] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.885 [2024-12-08 18:35:10.608423] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.885 
[2024-12-08 18:35:10.608426] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.612456] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.885 [2024-12-08 18:35:10.612513] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.612520] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.612524] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1645ac0) 00:17:52.885 [2024-12-08 18:35:10.612533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.885 [2024-12-08 18:35:10.612560] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x167ec40, cid 3, qid 0 00:17:52.885 [2024-12-08 18:35:10.612614] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:52.885 [2024-12-08 18:35:10.612621] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:52.885 [2024-12-08 18:35:10.612625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.612629] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x167ec40) on tqpair=0x1645ac0 00:17:52.885 [2024-12-08 18:35:10.612637] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds 00:17:52.885 00:17:52.885 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:52.885 [2024-12-08 18:35:10.652544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:52.885 [2024-12-08 18:35:10.652765] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88086 ] 00:17:52.885 [2024-12-08 18:35:10.791128] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:52.885 [2024-12-08 18:35:10.791183] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:52.885 [2024-12-08 18:35:10.791189] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:52.885 [2024-12-08 18:35:10.791199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:52.885 [2024-12-08 18:35:10.791209] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:52.885 [2024-12-08 18:35:10.791499] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:52.885 [2024-12-08 18:35:10.791574] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x124eac0 0 00:17:52.885 [2024-12-08 18:35:10.796433] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:52.885 [2024-12-08 18:35:10.796456] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:52.885 [2024-12-08 18:35:10.796470] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:52.885 [2024-12-08 18:35:10.796474] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:52.885 [2024-12-08 18:35:10.796506] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.796513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:52.885 [2024-12-08 18:35:10.796517] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:52.885 [2024-12-08 18:35:10.796530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:52.885 [2024-12-08 18:35:10.796561] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.149 [2024-12-08 18:35:10.810420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.149 [2024-12-08 18:35:10.810440] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.149 [2024-12-08 18:35:10.810445] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810449] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.149 [2024-12-08 18:35:10.810458] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:53.149 [2024-12-08 18:35:10.810466] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:53.149 [2024-12-08 18:35:10.810472] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:53.149 [2024-12-08 18:35:10.810485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810490] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810494] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.149 [2024-12-08 18:35:10.810503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.149 [2024-12-08 18:35:10.810528] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.149 [2024-12-08 18:35:10.810584] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.149 [2024-12-08 18:35:10.810591] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.149 [2024-12-08 18:35:10.810594] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810598] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.149 [2024-12-08 18:35:10.810604] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:53.149 [2024-12-08 18:35:10.810611] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:53.149 [2024-12-08 18:35:10.810619] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810623] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.149 [2024-12-08 18:35:10.810634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.149 [2024-12-08 18:35:10.810651] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.149 [2024-12-08 18:35:10.810692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.149 [2024-12-08 18:35:10.810699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.149 [2024-12-08 18:35:10.810703] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.149 [2024-12-08 18:35:10.810728] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:53.149 [2024-12-08 18:35:10.810736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:53.149 [2024-12-08 18:35:10.810744] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810748] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.810751] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.149 [2024-12-08 18:35:10.810759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.149 [2024-12-08 18:35:10.810776] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.149 [2024-12-08 18:35:10.811180] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.149 [2024-12-08 18:35:10.811194] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.149 [2024-12-08 18:35:10.811199] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.811203] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.149 [2024-12-08 18:35:10.811208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:53.149 [2024-12-08 18:35:10.811220] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.811225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.811229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.149 [2024-12-08 18:35:10.811236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.149 [2024-12-08 18:35:10.811256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.149 [2024-12-08 18:35:10.811333] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.149 [2024-12-08 18:35:10.811340] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.149 [2024-12-08 18:35:10.811344] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.811348] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.149 [2024-12-08 18:35:10.811353] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:53.149 [2024-12-08 18:35:10.811358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:53.149 [2024-12-08 18:35:10.811366] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:53.149 [2024-12-08 18:35:10.811472] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:53.149 [2024-12-08 18:35:10.811494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:53.149 [2024-12-08 18:35:10.811504] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.811509] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.149 [2024-12-08 18:35:10.811513] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.811521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.150 [2024-12-08 18:35:10.811542] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.150 [2024-12-08 18:35:10.812007] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.150 [2024-12-08 18:35:10.812016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.150 [2024-12-08 18:35:10.812019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812024] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.150 [2024-12-08 18:35:10.812029] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:53.150 [2024-12-08 18:35:10.812040] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812044] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812048] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.150 [2024-12-08 18:35:10.812075] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.150 [2024-12-08 18:35:10.812142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.150 [2024-12-08 18:35:10.812148] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.150 [2024-12-08 18:35:10.812152] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812156] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.150 [2024-12-08 18:35:10.812161] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:53.150 [2024-12-08 18:35:10.812166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812173] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:53.150 [2024-12-08 18:35:10.812188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812197] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.150 [2024-12-08 18:35:10.812228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.150 [2024-12-08 18:35:10.812317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.150 [2024-12-08 18:35:10.812324] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.150 [2024-12-08 18:35:10.812328] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812332] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124eac0): datao=0, datal=4096, cccid=0 00:17:53.150 [2024-12-08 18:35:10.812337] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12877c0) on tqpair(0x124eac0): expected_datao=0, payload_size=4096 00:17:53.150 [2024-12-08 18:35:10.812341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812348] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812353] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 
18:35:10.812361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.150 [2024-12-08 18:35:10.812367] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.150 [2024-12-08 18:35:10.812370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812374] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.150 [2024-12-08 18:35:10.812382] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:53.150 [2024-12-08 18:35:10.812388] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:53.150 [2024-12-08 18:35:10.812393] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:53.150 [2024-12-08 18:35:10.812397] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:53.150 [2024-12-08 18:35:10.812401] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:53.150 [2024-12-08 18:35:10.812406] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812426] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812430] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812434] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.150 [2024-12-08 18:35:10.812475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.150 [2024-12-08 18:35:10.812527] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.150 [2024-12-08 18:35:10.812534] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.150 [2024-12-08 18:35:10.812537] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812541] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.150 [2024-12-08 18:35:10.812548] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812556] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.150 [2024-12-08 18:35:10.812568] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812572] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812576] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x124eac0) 00:17:53.150 
[2024-12-08 18:35:10.812581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.150 [2024-12-08 18:35:10.812588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812591] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812595] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.150 [2024-12-08 18:35:10.812606] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812610] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812614] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.150 [2024-12-08 18:35:10.812624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812645] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812649] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.150 [2024-12-08 18:35:10.812675] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12877c0, cid 0, qid 0 00:17:53.150 [2024-12-08 18:35:10.812682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287940, cid 1, qid 0 00:17:53.150 [2024-12-08 18:35:10.812687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287ac0, cid 2, qid 0 00:17:53.150 [2024-12-08 18:35:10.812691] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.150 [2024-12-08 18:35:10.812696] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287dc0, cid 4, qid 0 00:17:53.150 [2024-12-08 18:35:10.812771] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.150 [2024-12-08 18:35:10.812777] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.150 [2024-12-08 18:35:10.812781] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812785] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287dc0) on tqpair=0x124eac0 00:17:53.150 [2024-12-08 18:35:10.812790] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:53.150 [2024-12-08 18:35:10.812795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812807] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812820] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812825] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812828] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124eac0) 00:17:53.150 [2024-12-08 18:35:10.812835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:53.150 [2024-12-08 18:35:10.812853] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287dc0, cid 4, qid 0 00:17:53.150 [2024-12-08 18:35:10.812898] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.150 [2024-12-08 18:35:10.812904] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.150 [2024-12-08 18:35:10.812908] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.150 [2024-12-08 18:35:10.812912] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287dc0) on tqpair=0x124eac0 00:17:53.150 [2024-12-08 18:35:10.812971] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:53.150 [2024-12-08 18:35:10.812990] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.812994] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.151 [2024-12-08 18:35:10.813020] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287dc0, cid 4, qid 0 00:17:53.151 [2024-12-08 18:35:10.813079] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.151 [2024-12-08 18:35:10.813086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.151 [2024-12-08 18:35:10.813089] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813093] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124eac0): datao=0, datal=4096, cccid=4 00:17:53.151 [2024-12-08 18:35:10.813097] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1287dc0) on tqpair(0x124eac0): expected_datao=0, payload_size=4096 00:17:53.151 [2024-12-08 18:35:10.813102] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813109] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813113] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813120] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.151 [2024-12-08 18:35:10.813126] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:53.151 [2024-12-08 18:35:10.813129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287dc0) on tqpair=0x124eac0 00:17:53.151 [2024-12-08 18:35:10.813149] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:53.151 [2024-12-08 18:35:10.813159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813176] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813180] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.151 [2024-12-08 18:35:10.813206] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287dc0, cid 4, qid 0 00:17:53.151 [2024-12-08 18:35:10.813276] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.151 [2024-12-08 18:35:10.813283] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.151 [2024-12-08 18:35:10.813286] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813290] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124eac0): datao=0, datal=4096, cccid=4 00:17:53.151 [2024-12-08 18:35:10.813294] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1287dc0) on tqpair(0x124eac0): expected_datao=0, payload_size=4096 00:17:53.151 [2024-12-08 18:35:10.813299] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813305] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813309] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.151 [2024-12-08 18:35:10.813323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.151 [2024-12-08 18:35:10.813326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813330] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287dc0) on tqpair=0x124eac0 00:17:53.151 [2024-12-08 18:35:10.813341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813363] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.151 [2024-12-08 18:35:10.813388] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287dc0, cid 4, qid 0 00:17:53.151 [2024-12-08 18:35:10.813456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.151 [2024-12-08 18:35:10.813464] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.151 [2024-12-08 18:35:10.813467] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813471] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124eac0): datao=0, datal=4096, cccid=4 00:17:53.151 [2024-12-08 18:35:10.813475] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1287dc0) on tqpair(0x124eac0): expected_datao=0, payload_size=4096 00:17:53.151 [2024-12-08 18:35:10.813480] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813486] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813490] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.151 [2024-12-08 18:35:10.813504] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.151 [2024-12-08 18:35:10.813507] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813511] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287dc0) on tqpair=0x124eac0 00:17:53.151 [2024-12-08 18:35:10.813523] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813532] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813555] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813565] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:53.151 [2024-12-08 18:35:10.813569] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:53.151 [2024-12-08 18:35:10.813574] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:53.151 [2024-12-08 18:35:10.813589] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813600] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.151 [2024-12-08 18:35:10.813608] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813611] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813615] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.151 [2024-12-08 18:35:10.813646] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287dc0, cid 4, qid 0 00:17:53.151 [2024-12-08 18:35:10.813653] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287f40, cid 5, qid 0 00:17:53.151 [2024-12-08 18:35:10.813715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.151 [2024-12-08 18:35:10.813722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.151 [2024-12-08 18:35:10.813725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287dc0) on tqpair=0x124eac0 00:17:53.151 [2024-12-08 18:35:10.813736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.151 [2024-12-08 18:35:10.813741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.151 [2024-12-08 18:35:10.813745] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287f40) on tqpair=0x124eac0 00:17:53.151 [2024-12-08 18:35:10.813758] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813763] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.151 [2024-12-08 18:35:10.813786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287f40, cid 5, qid 0 00:17:53.151 [2024-12-08 18:35:10.813833] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.151 [2024-12-08 18:35:10.813840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.151 [2024-12-08 18:35:10.813843] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813847] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287f40) on tqpair=0x124eac0 00:17:53.151 [2024-12-08 18:35:10.813857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813861] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.151 [2024-12-08 18:35:10.813883] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287f40, cid 5, qid 0 00:17:53.151 [2024-12-08 18:35:10.813926] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.151 
[2024-12-08 18:35:10.813932] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.151 [2024-12-08 18:35:10.813936] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813940] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287f40) on tqpair=0x124eac0 00:17:53.151 [2024-12-08 18:35:10.813949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.151 [2024-12-08 18:35:10.813954] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124eac0) 00:17:53.151 [2024-12-08 18:35:10.813960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.152 [2024-12-08 18:35:10.813976] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287f40, cid 5, qid 0 00:17:53.152 [2024-12-08 18:35:10.814019] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.152 [2024-12-08 18:35:10.814025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.152 [2024-12-08 18:35:10.814029] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287f40) on tqpair=0x124eac0 00:17:53.152 [2024-12-08 18:35:10.814049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814054] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x124eac0) 00:17:53.152 [2024-12-08 18:35:10.814061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.152 [2024-12-08 18:35:10.814069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x124eac0) 00:17:53.152 [2024-12-08 18:35:10.814079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.152 [2024-12-08 18:35:10.814086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x124eac0) 00:17:53.152 [2024-12-08 18:35:10.814096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.152 [2024-12-08 18:35:10.814104] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814107] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x124eac0) 00:17:53.152 [2024-12-08 18:35:10.814113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.152 [2024-12-08 18:35:10.814132] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287f40, cid 5, qid 0 00:17:53.152 [2024-12-08 18:35:10.814138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287dc0, cid 4, qid 0 00:17:53.152 [2024-12-08 18:35:10.814143] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12880c0, cid 6, qid 0 00:17:53.152 [2024-12-08 18:35:10.814147] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1288240, cid 7, qid 0 00:17:53.152 [2024-12-08 18:35:10.814283] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.152 [2024-12-08 18:35:10.814290] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.152 [2024-12-08 18:35:10.814293] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814297] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124eac0): datao=0, datal=8192, cccid=5 00:17:53.152 [2024-12-08 18:35:10.814301] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1287f40) on tqpair(0x124eac0): expected_datao=0, payload_size=8192 00:17:53.152 [2024-12-08 18:35:10.814306] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814320] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814325] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814331] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.152 [2024-12-08 18:35:10.814336] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.152 [2024-12-08 18:35:10.814340] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814343] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124eac0): datao=0, datal=512, cccid=4 00:17:53.152 [2024-12-08 18:35:10.814347] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1287dc0) on tqpair(0x124eac0): expected_datao=0, payload_size=512 00:17:53.152 [2024-12-08 18:35:10.814352] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814357] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814361] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.152 [2024-12-08 18:35:10.814372] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.152 [2024-12-08 18:35:10.814375] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814378] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x124eac0): datao=0, datal=512, cccid=6 00:17:53.152 [2024-12-08 18:35:10.814383] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12880c0) on tqpair(0x124eac0): expected_datao=0, payload_size=512 00:17:53.152 [2024-12-08 18:35:10.814388] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814394] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.814397] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.818442] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:53.152 [2024-12-08 18:35:10.818459] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:53.152 [2024-12-08 18:35:10.818464] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:53.152 [2024-12-08 18:35:10.818468] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x124eac0): datao=0, datal=4096, cccid=7
00:17:53.152 [2024-12-08 18:35:10.818472] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1288240) on tqpair(0x124eac0): expected_datao=0, payload_size=4096
00:17:53.152 [2024-12-08 18:35:10.818477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:53.152 =====================================================
00:17:53.152 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:53.152 =====================================================
00:17:53.152 Controller Capabilities/Features
00:17:53.152 ================================
00:17:53.152 Vendor ID: 8086
00:17:53.152 Subsystem Vendor ID: 8086
00:17:53.152 Serial Number: SPDK00000000000001
00:17:53.152 Model Number: SPDK bdev Controller
00:17:53.152 Firmware Version: 24.09.1
00:17:53.152 Recommended Arb Burst: 6
00:17:53.152 IEEE OUI Identifier: e4 d2 5c
00:17:53.152 Multi-path I/O
00:17:53.152 May have multiple subsystem ports: Yes
00:17:53.152 May have multiple controllers: Yes
00:17:53.152 Associated with SR-IOV VF: No
00:17:53.152 Max Data Transfer Size: 131072
00:17:53.152 Max Number of Namespaces: 32
00:17:53.152 Max Number of I/O Queues: 127
00:17:53.152 NVMe Specification Version (VS): 1.3
00:17:53.152 NVMe Specification Version (Identify): 1.3
00:17:53.152 Maximum Queue Entries: 128
00:17:53.152 Contiguous Queues Required: Yes
00:17:53.152 Arbitration Mechanisms Supported
00:17:53.152 Weighted Round Robin: Not Supported
00:17:53.152 Vendor Specific: Not Supported
00:17:53.152 Reset Timeout: 15000 ms
00:17:53.152 Doorbell Stride: 4 bytes
00:17:53.152 NVM Subsystem Reset: Not Supported
00:17:53.152 Command Sets Supported
00:17:53.152 NVM Command Set: Supported
00:17:53.152 Boot Partition: Not Supported
00:17:53.152 Memory Page Size Minimum: 4096 bytes
00:17:53.152 Memory Page Size Maximum: 4096 bytes
00:17:53.152 Persistent Memory Region: Not Supported
00:17:53.152 Optional Asynchronous Events Supported
00:17:53.152 Namespace Attribute Notices: Supported
00:17:53.152 Firmware Activation Notices: Not Supported
00:17:53.152 ANA Change Notices: Not Supported
00:17:53.152 PLE Aggregate Log Change Notices: Not Supported
00:17:53.152 LBA Status Info Alert Notices: Not Supported
00:17:53.152 EGE Aggregate Log Change Notices: Not Supported
00:17:53.152 Normal NVM Subsystem Shutdown event: Not Supported
00:17:53.152 Zone Descriptor Change Notices: Not Supported
00:17:53.152 Discovery Log Change Notices: Not Supported
00:17:53.152 Controller Attributes
00:17:53.152 128-bit Host Identifier: Supported
00:17:53.152 Non-Operational Permissive Mode: Not Supported
00:17:53.152 NVM Sets: Not Supported
00:17:53.152 Read Recovery Levels: Not Supported
00:17:53.152 Endurance Groups: Not Supported
00:17:53.152 Predictable Latency Mode: Not Supported
00:17:53.152 Traffic Based Keep ALive: Not Supported
00:17:53.152 Namespace Granularity: Not Supported
00:17:53.152 SQ Associations: Not Supported
00:17:53.152 UUID List: Not Supported
00:17:53.152 Multi-Domain Subsystem: Not Supported
00:17:53.152 Fixed Capacity Management: Not Supported
00:17:53.152 Variable Capacity Management: Not Supported
00:17:53.152 Delete Endurance Group: Not Supported
00:17:53.152 Delete NVM Set: Not Supported
00:17:53.152 Extended LBA Formats Supported: Not Supported
00:17:53.152 Flexible Data Placement Supported: Not Supported
00:17:53.152
00:17:53.152 Controller Memory Buffer Support
00:17:53.152 ================================
00:17:53.152 Supported: No
00:17:53.152
00:17:53.152 Persistent Memory Region Support
00:17:53.152 ================================
00:17:53.152 Supported: No
00:17:53.152
00:17:53.152 Admin Command Set Attributes
00:17:53.152 ============================
00:17:53.152 Security Send/Receive: Not Supported
00:17:53.152 Format NVM: Not Supported
00:17:53.152 Firmware Activate/Download: Not Supported
00:17:53.152 Namespace Management: Not Supported
00:17:53.152 Device Self-Test: Not Supported
00:17:53.152 Directives: Not Supported
00:17:53.152 NVMe-MI: Not Supported
00:17:53.152 Virtualization Management: Not Supported
00:17:53.152 Doorbell Buffer Config: Not Supported
00:17:53.152 Get LBA Status Capability: Not Supported
00:17:53.152 Command & Feature Lockdown Capability: Not Supported
00:17:53.152 Abort Command Limit: 4
00:17:53.152 Async Event Request Limit: 4
00:17:53.152 Number of Firmware Slots: N/A
00:17:53.152 Firmware Slot 1 Read-Only: N/A
00:17:53.152 Firmware Activation Without Reset: N/A
00:17:53.152 Multiple Update Detection Support: N/A
00:17:53.152 Firmware Update Granularity: No Information Provided
00:17:53.152 Per-Namespace SMART Log: No
00:17:53.152 Asymmetric Namespace Access Log Page: Not Supported
00:17:53.153 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:17:53.153 Command Effects Log Page: Supported
00:17:53.153 Get Log Page Extended Data: Supported
00:17:53.153 Telemetry Log Pages: Not Supported
00:17:53.153 Persistent Event Log Pages: Not Supported
00:17:53.153 Supported Log Pages Log Page: May Support
00:17:53.153 Commands Supported & Effects Log Page: Not Supported
00:17:53.153 Feature Identifiers & Effects Log Page:May Support
00:17:53.153 NVMe-MI Commands & Effects Log Page: May Support
00:17:53.153 Data Area 4 for Telemetry Log: Not Supported
00:17:53.153 Error Log Page Entries Supported: 128
00:17:53.153 Keep Alive: Supported
00:17:53.153 Keep Alive Granularity: 10000 ms
00:17:53.153
00:17:53.153 NVM Command Set Attributes
00:17:53.153 ==========================
00:17:53.153 Submission Queue Entry Size
00:17:53.153 Max: 64
00:17:53.153 Min: 64
00:17:53.153 Completion Queue Entry Size
00:17:53.153 Max: 16
00:17:53.153 Min: 16
00:17:53.153 Number of Namespaces: 32
00:17:53.153 Compare Command: Supported
00:17:53.153 Write Uncorrectable Command: Not Supported
00:17:53.153 Dataset Management Command: Supported
00:17:53.153 Write Zeroes Command: Supported
00:17:53.153 Set Features Save Field: Not Supported
00:17:53.153 Reservations: Supported
00:17:53.153 Timestamp: Not Supported
00:17:53.153 Copy: Supported
00:17:53.153 Volatile Write Cache: Present
00:17:53.153 Atomic Write Unit (Normal): 1
00:17:53.153 Atomic Write Unit (PFail): 1
00:17:53.153 Atomic Compare & Write Unit: 1
00:17:53.153 Fused Compare & Write: Supported
00:17:53.153 Scatter-Gather List
00:17:53.153 SGL Command Set: Supported
00:17:53.153 SGL Keyed: Supported
00:17:53.153 SGL Bit Bucket Descriptor: Not Supported
00:17:53.153 SGL Metadata Pointer: Not Supported
00:17:53.153 Oversized SGL: Not Supported
00:17:53.153 SGL Metadata Address: Not Supported
00:17:53.153 SGL Offset: Supported
00:17:53.153 Transport SGL Data Block: Not Supported
00:17:53.153 Replay Protected Memory Block: Not Supported
00:17:53.153
00:17:53.153 Firmware Slot Information
00:17:53.153 =========================
00:17:53.153 Active slot: 1
00:17:53.153 Slot 1 Firmware Revision: 24.09.1
00:17:53.153
00:17:53.153
00:17:53.153 Commands Supported and Effects
00:17:53.153 ==============================
00:17:53.153 Admin Commands
00:17:53.153 --------------
00:17:53.153 Get Log Page (02h): Supported
00:17:53.153 Identify (06h): Supported
00:17:53.153 Abort (08h): Supported
00:17:53.153 Set Features (09h): Supported
00:17:53.153 Get Features (0Ah): Supported
00:17:53.153 Asynchronous Event Request (0Ch): Supported
00:17:53.153 Keep Alive (18h): Supported
00:17:53.153 I/O Commands
00:17:53.153 ------------
00:17:53.153 Flush (00h): Supported LBA-Change
00:17:53.153 Write (01h): Supported LBA-Change
00:17:53.153 Read (02h): Supported
00:17:53.153 Compare (05h): Supported
00:17:53.153 Write Zeroes (08h): Supported LBA-Change
00:17:53.153 Dataset Management (09h): Supported LBA-Change
00:17:53.153 Copy (19h): Supported LBA-Change
00:17:53.153
00:17:53.153 Error Log
00:17:53.153 =========
00:17:53.153
00:17:53.153 Arbitration
00:17:53.153 ===========
00:17:53.153 Arbitration Burst: 1
00:17:53.153
00:17:53.153 Power Management
00:17:53.153 ================
00:17:53.153 Number of Power States: 1
00:17:53.153 Current Power State: Power State #0
00:17:53.153 Power State #0:
00:17:53.153 Max Power: 0.00 W
00:17:53.153 Non-Operational State: Operational
00:17:53.153 Entry Latency: Not Reported
00:17:53.153 Exit Latency: Not Reported
00:17:53.153 Relative Read Throughput: 0
00:17:53.153 Relative Read Latency: 0
00:17:53.153 Relative Write Throughput: 0
00:17:53.153 Relative Write Latency: 0
00:17:53.153 Idle Power: Not Reported
00:17:53.153 Active Power: Not Reported
00:17:53.153 Non-Operational Permissive Mode: Not Supported
00:17:53.153
00:17:53.153 Health Information
00:17:53.153 ==================
00:17:53.153 Critical Warnings:
00:17:53.153 Available Spare Space: OK
00:17:53.153 Temperature: OK
00:17:53.153 Device Reliability: OK
00:17:53.153 Read Only: No
00:17:53.153 Volatile Memory Backup: OK
00:17:53.153 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:53.153 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:53.153 Available Spare: 0%
00:17:53.153 Available Spare Threshold: 0%
00:17:53.153 Life Percentage U[2024-12-08 18:35:10.818484] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:17:53.153 [2024-12-08 18:35:10.818488] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:53.153 [2024-12-08 18:35:10.818497] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:53.153 [2024-12-08 18:35:10.818503] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:53.153 [2024-12-08 18:35:10.818506] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:53.153 [2024-12-08 18:35:10.818510] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287f40) on tqpair=0x124eac0
00:17:53.153 [2024-12-08 18:35:10.818528] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:53.153 [2024-12-08 18:35:10.818550] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:53.153 [2024-12-08 18:35:10.818554] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:53.153 [2024-12-08 18:35:10.818574] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287dc0) on tqpair=0x124eac0
00:17:53.153 [2024-12-08 18:35:10.818586] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:53.153 [2024-12-08 18:35:10.818593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:53.153 [2024-12-08 18:35:10.818596] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:53.153 [2024-12-08 18:35:10.818600] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12880c0) on tqpair=0x124eac0
00:17:53.153 [2024-12-08 18:35:10.818607] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.153 [2024-12-08 18:35:10.818613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.153 [2024-12-08 18:35:10.818616] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.818620] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1288240) on tqpair=0x124eac0 00:17:53.153 [2024-12-08 18:35:10.818740] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.818748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x124eac0) 00:17:53.153 [2024-12-08 18:35:10.818757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.153 [2024-12-08 18:35:10.818786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1288240, cid 7, qid 0 00:17:53.153 [2024-12-08 18:35:10.819341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.153 [2024-12-08 18:35:10.819354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.153 [2024-12-08 18:35:10.819359] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.819363] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1288240) on tqpair=0x124eac0 00:17:53.153 [2024-12-08 18:35:10.819414] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:53.153 [2024-12-08 18:35:10.819428] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12877c0) on tqpair=0x124eac0 00:17:53.153 [2024-12-08 18:35:10.819435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.153 [2024-12-08 18:35:10.819441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287940) on tqpair=0x124eac0 00:17:53.153 [2024-12-08 18:35:10.819446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.153 [2024-12-08 18:35:10.819452] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287ac0) on tqpair=0x124eac0 00:17:53.153 [2024-12-08 18:35:10.819457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.153 [2024-12-08 18:35:10.819462] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.153 [2024-12-08 18:35:10.819467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.153 [2024-12-08 18:35:10.819477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.819481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.819485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.153 [2024-12-08 18:35:10.819494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.153 [2024-12-08 18:35:10.819518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.153 [2024-12-08 
18:35:10.819997] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.153 [2024-12-08 18:35:10.820014] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.153 [2024-12-08 18:35:10.820018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.820023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.153 [2024-12-08 18:35:10.820031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.820036] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.153 [2024-12-08 18:35:10.820040] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.153 [2024-12-08 18:35:10.820048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.153 [2024-12-08 18:35:10.820076] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.153 [2024-12-08 18:35:10.820166] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.153 [2024-12-08 18:35:10.820187] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820191] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820200] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:53.154 [2024-12-08 18:35:10.820205] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:53.154 [2024-12-08 18:35:10.820215] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820219] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820223] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.820300] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.820306] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820310] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820329] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820357] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.820403] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.820410] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820413] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820417] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820427] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820478] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.820531] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.820537] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820541] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820545] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820587] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.820633] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.820640] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820662] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820689] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.820735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 
18:35:10.820741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820745] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820749] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820759] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820764] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820767] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820791] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.820837] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.820844] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820847] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820851] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820861] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820866] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820869] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.820938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.820945] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.820948] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820952] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.820962] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820967] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.820971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.820978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.820994] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.821040] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.821047] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.821050] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 
[2024-12-08 18:35:10.821054] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.821064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.821069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.821072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.154 [2024-12-08 18:35:10.821079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.154 [2024-12-08 18:35:10.821096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.154 [2024-12-08 18:35:10.821142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.154 [2024-12-08 18:35:10.821149] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.154 [2024-12-08 18:35:10.821153] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.821156] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.154 [2024-12-08 18:35:10.821166] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.154 [2024-12-08 18:35:10.821171] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.821175] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.155 [2024-12-08 18:35:10.821182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.155 [2024-12-08 18:35:10.821198] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.155 [2024-12-08 18:35:10.821239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.155 [2024-12-08 18:35:10.821245] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.155 [2024-12-08 18:35:10.821249] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.821253] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.155 [2024-12-08 18:35:10.821263] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.821267] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.821271] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.155 [2024-12-08 18:35:10.821278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.155 [2024-12-08 18:35:10.821296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.155 [2024-12-08 18:35:10.821339] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.155 [2024-12-08 18:35:10.821345] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.155 [2024-12-08 18:35:10.821349] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.821353] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.155 [2024-12-08 18:35:10.821363] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.821368] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.821371] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.155 [2024-12-08 18:35:10.821378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.155 [... identical FABRIC PROPERTY GET capsule_cmd send / pdu type 5 receive / tcp_req complete DEBUG cycles for cid=3 on tqpair(0x124eac0), entries 18:35:10.821395 through 18:35:10.822306, omitted as duplicates ...] 00:17:53.155 [2024-12-08
18:35:10.822355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.155 [2024-12-08 18:35:10.822362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.155 [2024-12-08 18:35:10.822365] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.155 [2024-12-08 18:35:10.822369] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.156 [2024-12-08 18:35:10.822379] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:53.156 [2024-12-08 18:35:10.822384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:53.156 [2024-12-08 18:35:10.822387] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x124eac0) 00:17:53.156 [2024-12-08 18:35:10.822394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.156 [2024-12-08 18:35:10.826435] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1287c40, cid 3, qid 0 00:17:53.156 [2024-12-08 18:35:10.826490] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:53.156 [2024-12-08 18:35:10.826499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:53.156 [2024-12-08 18:35:10.826502] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:53.156 [2024-12-08 18:35:10.826507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1287c40) on tqpair=0x124eac0 00:17:53.156 [2024-12-08 18:35:10.826516] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:17:53.156 sed: 0% 00:17:53.156 Data Units Read: 0 00:17:53.156 Data Units Written: 0 00:17:53.156 Host Read Commands: 0 00:17:53.156 Host Write Commands: 0 00:17:53.156 Controller Busy Time: 0 minutes 00:17:53.156 Power Cycles: 0 00:17:53.156 Power On Hours: 0 hours 00:17:53.156 Unsafe Shutdowns: 0 00:17:53.156 Unrecoverable Media Errors: 0 00:17:53.156 Lifetime Error Log Entries: 0 00:17:53.156 Warning Temperature Time: 0 minutes 00:17:53.156 Critical Temperature Time: 0 minutes 00:17:53.156 00:17:53.156 Number of Queues 00:17:53.156 ================ 00:17:53.156 Number of I/O Submission Queues: 127 00:17:53.156 Number of I/O Completion Queues: 127 00:17:53.156 00:17:53.156 Active Namespaces 00:17:53.156 ================= 00:17:53.156 Namespace ID:1 00:17:53.156 Error Recovery Timeout: Unlimited 00:17:53.156 Command Set Identifier: NVM (00h) 00:17:53.156 Deallocate: Supported 00:17:53.156 Deallocated/Unwritten Error: Not Supported 00:17:53.156 Deallocated Read Value: Unknown 00:17:53.156 Deallocate in Write Zeroes: Not Supported 00:17:53.156 Deallocated Guard Field: 0xFFFF 00:17:53.156 Flush: Supported 00:17:53.156 Reservation: Supported 00:17:53.156 Namespace Sharing Capabilities: Multiple Controllers 00:17:53.156 Size (in LBAs): 131072 (0GiB) 00:17:53.156 Capacity (in LBAs): 131072 (0GiB) 00:17:53.156 Utilization (in LBAs): 131072 (0GiB) 00:17:53.156 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:53.156 EUI64: ABCDEF0123456789 00:17:53.156 UUID: f8c12de1-d31a-43e8-b665-08c5d77fbb91 00:17:53.156 Thin Provisioning: Not Supported 00:17:53.156 Per-NS Atomic Units: Yes 00:17:53.156 Atomic Boundary Size (Normal): 0 00:17:53.156 Atomic Boundary Size (PFail): 0 00:17:53.156 Atomic Boundary Offset: 0 00:17:53.156 Maximum Single Source Range Length: 65535 00:17:53.156 Maximum Copy Length: 65535 00:17:53.156 
Maximum Source Range Count: 1 00:17:53.156 NGUID/EUI64 Never Reused: No 00:17:53.156 Namespace Write Protected: No 00:17:53.156 Number of LBA Formats: 1 00:17:53.156 Current LBA Format: LBA Format #00 00:17:53.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:53.156 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:53.156 rmmod nvme_tcp 00:17:53.156 rmmod nvme_fabrics 00:17:53.156 rmmod nvme_keyring 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 88051 ']' 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 88051 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 88051 ']' 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 88051 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.156 18:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88051 00:17:53.156 killing process with pid 88051 00:17:53.156 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:53.156 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:53.156 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88051' 00:17:53.156 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 88051 00:17:53.156 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 88051 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 
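The trace above is the identify test's teardown: the test subsystem is deleted over RPC, the exit traps are cleared, and nvmftestfini unloads the host NVMe modules and stops the target process. A minimal sketch of that sequence, assuming the same repo paths as this run; the explicit modprobe and kill lines below only stand in for what the nvmftestfini and killprocess helpers do in the trace, they are not the helpers themselves.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the test subsystem over RPC first
trap - SIGINT SIGTERM EXIT                                  # clear the test's error/exit trap
modprobe -v -r nvme-tcp                                     # host-side unloads echoed as "rmmod nvme_tcp" above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                          # nvmfpid was 88051 for this identify run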
00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:53.416 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:53.417 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:53.676 00:17:53.676 real 0m2.248s 00:17:53.676 user 0m4.432s 00:17:53.676 sys 0m0.773s 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:53.676 ************************************ 00:17:53.676 END TEST nvmf_identify 00:17:53.676 ************************************ 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.676 ************************************ 00:17:53.676 START TEST nvmf_perf 00:17:53.676 ************************************ 00:17:53.676 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:53.937 * Looking for test storage... 00:17:53.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.937 --rc genhtml_branch_coverage=1 00:17:53.937 --rc genhtml_function_coverage=1 00:17:53.937 --rc genhtml_legend=1 00:17:53.937 --rc geninfo_all_blocks=1 00:17:53.937 --rc geninfo_unexecuted_blocks=1 00:17:53.937 00:17:53.937 ' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.937 --rc genhtml_branch_coverage=1 00:17:53.937 --rc genhtml_function_coverage=1 00:17:53.937 --rc genhtml_legend=1 00:17:53.937 --rc geninfo_all_blocks=1 00:17:53.937 --rc geninfo_unexecuted_blocks=1 00:17:53.937 00:17:53.937 ' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.937 --rc genhtml_branch_coverage=1 00:17:53.937 --rc genhtml_function_coverage=1 00:17:53.937 --rc genhtml_legend=1 00:17:53.937 --rc geninfo_all_blocks=1 00:17:53.937 --rc geninfo_unexecuted_blocks=1 00:17:53.937 00:17:53.937 ' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:53.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.937 --rc genhtml_branch_coverage=1 00:17:53.937 --rc genhtml_function_coverage=1 00:17:53.937 --rc genhtml_legend=1 00:17:53.937 --rc geninfo_all_blocks=1 00:17:53.937 --rc geninfo_unexecuted_blocks=1 00:17:53.937 00:17:53.937 ' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.937 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.938 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:53.938 Cannot find device "nvmf_init_br" 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:53.938 Cannot find device "nvmf_init_br2" 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:53.938 Cannot find device "nvmf_tgt_br" 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.938 Cannot find device "nvmf_tgt_br2" 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:53.938 Cannot find device "nvmf_init_br" 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:53.938 Cannot find device "nvmf_init_br2" 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:53.938 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:54.198 Cannot find device "nvmf_tgt_br" 00:17:54.198 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:54.199 Cannot find device "nvmf_tgt_br2" 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:54.199 Cannot find device "nvmf_br" 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:54.199 Cannot find device "nvmf_init_if" 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:54.199 Cannot find device "nvmf_init_if2" 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.199 18:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:54.199 18:35:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:54.199 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:54.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:17:54.459 00:17:54.459 --- 10.0.0.3 ping statistics --- 00:17:54.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.459 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:54.459 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:54.459 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:17:54.459 00:17:54.459 --- 10.0.0.4 ping statistics --- 00:17:54.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.459 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:54.459 00:17:54.459 --- 10.0.0.1 ping statistics --- 00:17:54.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.459 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:54.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:54.459 00:17:54.459 --- 10.0.0.2 ping statistics --- 00:17:54.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.459 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=88301 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 88301 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 88301 ']' 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
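nvmfappstart above launches the target application inside the nvmf_tgt_ns_spdk namespace and then blocks in waitforlisten until the app answers on its RPC socket. A minimal sketch of that startup, assuming the binary path, namespace, and flags from this run; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                   # 88301 in this run
# poll the app's default RPC socket so later rpc.py calls do not race the startup
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done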
00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.459 18:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:54.459 [2024-12-08 18:35:12.300066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:54.459 [2024-12-08 18:35:12.300188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.719 [2024-12-08 18:35:12.441856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.719 [2024-12-08 18:35:12.521656] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.719 [2024-12-08 18:35:12.521910] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.719 [2024-12-08 18:35:12.522084] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.719 [2024-12-08 18:35:12.522233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.719 [2024-12-08 18:35:12.522276] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.719 [2024-12-08 18:35:12.522575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.719 [2024-12-08 18:35:12.522663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.719 [2024-12-08 18:35:12.524441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.719 [2024-12-08 18:35:12.524479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.719 [2024-12-08 18:35:12.580856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:55.658 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:56.226 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:56.226 18:35:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:56.485 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:56.485 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:56.744 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:56.744 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:00:10.0 ']' 00:17:56.744 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:56.744 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:56.744 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.003 [2024-12-08 18:35:14.776622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.003 18:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.263 18:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:57.263 18:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.521 18:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:57.521 18:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:57.778 18:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:58.036 [2024-12-08 18:35:15.782827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:58.036 18:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:58.294 18:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:58.294 18:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:58.294 18:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:58.294 18:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:59.229 Initializing NVMe Controllers 00:17:59.229 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:59.229 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:59.229 Initialization complete. Launching workers. 00:17:59.229 ======================================================== 00:17:59.229 Latency(us) 00:17:59.229 Device Information : IOPS MiB/s Average min max 00:17:59.229 PCIE (0000:00:10.0) NSID 1 from core 0: 24617.84 96.16 1300.03 240.56 7688.47 00:17:59.229 ======================================================== 00:17:59.229 Total : 24617.84 96.16 1300.03 240.56 7688.47 00:17:59.229 00:17:59.488 18:35:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:00.863 Initializing NVMe Controllers 00:18:00.863 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:00.863 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:00.863 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:00.863 Initialization complete. Launching workers. 
00:18:00.863 ======================================================== 00:18:00.863 Latency(us) 00:18:00.863 Device Information : IOPS MiB/s Average min max 00:18:00.863 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3247.53 12.69 307.55 102.49 4251.93 00:18:00.863 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.75 0.49 8071.33 7007.40 12027.99 00:18:00.863 ======================================================== 00:18:00.863 Total : 3372.29 13.17 594.76 102.49 12027.99 00:18:00.863 00:18:00.863 18:35:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.249 Initializing NVMe Controllers 00:18:02.249 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.249 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.249 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:02.249 Initialization complete. Launching workers. 00:18:02.249 ======================================================== 00:18:02.249 Latency(us) 00:18:02.249 Device Information : IOPS MiB/s Average min max 00:18:02.249 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9145.15 35.72 3498.80 537.69 7663.29 00:18:02.249 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3979.96 15.55 8052.30 5350.23 13197.56 00:18:02.249 ======================================================== 00:18:02.249 Total : 13125.12 51.27 4879.57 537.69 13197.56 00:18:02.249 00:18:02.249 18:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:02.249 18:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:04.778 Initializing NVMe Controllers 00:18:04.778 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.778 Controller IO queue size 128, less than required. 00:18:04.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.778 Controller IO queue size 128, less than required. 00:18:04.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.778 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.778 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.778 Initialization complete. Launching workers. 
00:18:04.778 ======================================================== 00:18:04.778 Latency(us) 00:18:04.778 Device Information : IOPS MiB/s Average min max 00:18:04.778 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1642.62 410.66 78592.96 35728.88 138115.13 00:18:04.778 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 635.77 158.94 205459.70 60807.91 328878.93 00:18:04.778 ======================================================== 00:18:04.778 Total : 2278.40 569.60 113994.40 35728.88 328878.93 00:18:04.778 00:18:04.778 18:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:04.778 Initializing NVMe Controllers 00:18:04.778 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.778 Controller IO queue size 128, less than required. 00:18:04.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.778 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:04.778 Controller IO queue size 128, less than required. 00:18:04.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.778 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:04.778 WARNING: Some requested NVMe devices were skipped 00:18:04.778 No valid NVMe controllers or AIO or URING devices found 00:18:04.778 18:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:07.340 Initializing NVMe Controllers 00:18:07.341 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.341 Controller IO queue size 128, less than required. 00:18:07.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:07.341 Controller IO queue size 128, less than required. 00:18:07.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:07.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:07.341 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:07.341 Initialization complete. Launching workers. 
00:18:07.341 00:18:07.341 ==================== 00:18:07.341 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:07.341 TCP transport: 00:18:07.341 polls: 9290 00:18:07.341 idle_polls: 5958 00:18:07.341 sock_completions: 3332 00:18:07.341 nvme_completions: 5761 00:18:07.341 submitted_requests: 8686 00:18:07.341 queued_requests: 1 00:18:07.341 00:18:07.341 ==================== 00:18:07.341 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:07.341 TCP transport: 00:18:07.341 polls: 9887 00:18:07.341 idle_polls: 5202 00:18:07.341 sock_completions: 4685 00:18:07.341 nvme_completions: 6553 00:18:07.341 submitted_requests: 9782 00:18:07.341 queued_requests: 1 00:18:07.341 ======================================================== 00:18:07.341 Latency(us) 00:18:07.341 Device Information : IOPS MiB/s Average min max 00:18:07.341 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1439.89 359.97 90872.97 47172.68 154489.54 00:18:07.341 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1637.88 409.47 78966.64 42145.59 112164.76 00:18:07.341 ======================================================== 00:18:07.341 Total : 3077.77 769.44 84536.85 42145.59 154489.54 00:18:07.341 00:18:07.341 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:07.341 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:07.599 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:07.599 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:07.599 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:07.856 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=a33f2134-7685-4205-94b7-86c432e2202a 00:18:07.856 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb a33f2134-7685-4205-94b7-86c432e2202a 00:18:07.857 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=a33f2134-7685-4205-94b7-86c432e2202a 00:18:07.857 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:07.857 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:07.857 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:07.857 18:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:08.115 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:08.115 { 00:18:08.115 "uuid": "a33f2134-7685-4205-94b7-86c432e2202a", 00:18:08.115 "name": "lvs_0", 00:18:08.115 "base_bdev": "Nvme0n1", 00:18:08.115 "total_data_clusters": 1278, 00:18:08.115 "free_clusters": 1278, 00:18:08.115 "block_size": 4096, 00:18:08.115 "cluster_size": 4194304 00:18:08.115 } 00:18:08.115 ]' 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a33f2134-7685-4205-94b7-86c432e2202a") .free_clusters' 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="a33f2134-7685-4205-94b7-86c432e2202a") .cluster_size' 00:18:08.373 5112 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:08.373 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a33f2134-7685-4205-94b7-86c432e2202a lbd_0 5112 00:18:08.631 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=6d6beeac-d451-409d-8b43-890dd5176bda 00:18:08.631 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 6d6beeac-d451-409d-8b43-890dd5176bda lvs_n_0 00:18:08.889 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=9113569c-09fd-4a17-a8fb-1e2d532c68cc 00:18:08.889 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 9113569c-09fd-4a17-a8fb-1e2d532c68cc 00:18:08.889 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=9113569c-09fd-4a17-a8fb-1e2d532c68cc 00:18:08.890 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:08.890 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:08.890 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:08.890 18:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:09.148 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:09.148 { 00:18:09.148 "uuid": "a33f2134-7685-4205-94b7-86c432e2202a", 00:18:09.148 "name": "lvs_0", 00:18:09.148 "base_bdev": "Nvme0n1", 00:18:09.148 "total_data_clusters": 1278, 00:18:09.148 "free_clusters": 0, 00:18:09.148 "block_size": 4096, 00:18:09.148 "cluster_size": 4194304 00:18:09.148 }, 00:18:09.148 { 00:18:09.148 "uuid": "9113569c-09fd-4a17-a8fb-1e2d532c68cc", 00:18:09.148 "name": "lvs_n_0", 00:18:09.148 "base_bdev": "6d6beeac-d451-409d-8b43-890dd5176bda", 00:18:09.148 "total_data_clusters": 1276, 00:18:09.148 "free_clusters": 1276, 00:18:09.148 "block_size": 4096, 00:18:09.148 "cluster_size": 4194304 00:18:09.148 } 00:18:09.148 ]' 00:18:09.148 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9113569c-09fd-4a17-a8fb-1e2d532c68cc") .free_clusters' 00:18:09.406 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:18:09.406 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9113569c-09fd-4a17-a8fb-1e2d532c68cc") .cluster_size' 00:18:09.406 5104 00:18:09.406 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:09.406 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:18:09.406 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:18:09.406 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:09.406 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9113569c-09fd-4a17-a8fb-1e2d532c68cc lbd_nest_0 5104 00:18:09.665 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5e9878fc-3c46-4cef-a8fe-8232629dff95 00:18:09.665 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.923 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:09.923 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5e9878fc-3c46-4cef-a8fe-8232629dff95 00:18:10.182 18:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:10.440 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:10.440 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:10.440 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:10.440 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:10.440 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:10.698 Initializing NVMe Controllers 00:18:10.698 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.698 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:10.698 WARNING: Some requested NVMe devices were skipped 00:18:10.698 No valid NVMe controllers or AIO or URING devices found 00:18:10.698 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:10.698 18:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:22.910 Initializing NVMe Controllers 00:18:22.910 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:22.910 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:22.910 Initialization complete. Launching workers. 
00:18:22.910 ======================================================== 00:18:22.910 Latency(us) 00:18:22.910 Device Information : IOPS MiB/s Average min max 00:18:22.910 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 912.82 114.10 1095.02 330.47 8367.06 00:18:22.910 ======================================================== 00:18:22.910 Total : 912.82 114.10 1095.02 330.47 8367.06 00:18:22.910 00:18:22.910 18:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:22.910 18:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:22.910 18:35:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:22.910 Initializing NVMe Controllers 00:18:22.910 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:22.910 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:22.910 WARNING: Some requested NVMe devices were skipped 00:18:22.910 No valid NVMe controllers or AIO or URING devices found 00:18:22.910 18:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:22.910 18:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.895 Initializing NVMe Controllers 00:18:32.895 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:32.895 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:32.895 Initialization complete. Launching workers. 
00:18:32.895 ======================================================== 00:18:32.895 Latency(us) 00:18:32.895 Device Information : IOPS MiB/s Average min max 00:18:32.895 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1321.77 165.22 24219.44 6415.50 68015.82 00:18:32.895 ======================================================== 00:18:32.895 Total : 1321.77 165.22 24219.44 6415.50 68015.82 00:18:32.895 00:18:32.895 18:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:32.895 18:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:32.895 18:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.895 Initializing NVMe Controllers 00:18:32.895 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:32.895 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:32.895 WARNING: Some requested NVMe devices were skipped 00:18:32.895 No valid NVMe controllers or AIO or URING devices found 00:18:32.895 18:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:32.895 18:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.903 Initializing NVMe Controllers 00:18:42.903 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.903 Controller IO queue size 128, less than required. 00:18:42.903 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:42.903 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:42.903 Initialization complete. Launching workers. 
00:18:42.903 ======================================================== 00:18:42.903 Latency(us) 00:18:42.903 Device Information : IOPS MiB/s Average min max 00:18:42.903 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4201.07 525.13 30460.98 13716.87 63056.95 00:18:42.903 ======================================================== 00:18:42.903 Total : 4201.07 525.13 30460.98 13716.87 63056.95 00:18:42.903 00:18:42.903 18:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.903 18:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5e9878fc-3c46-4cef-a8fe-8232629dff95 00:18:42.903 18:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:43.163 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6d6beeac-d451-409d-8b43-890dd5176bda 00:18:43.423 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.684 rmmod nvme_tcp 00:18:43.684 rmmod nvme_fabrics 00:18:43.684 rmmod nvme_keyring 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 88301 ']' 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 88301 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 88301 ']' 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 88301 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.684 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88301 00:18:43.943 killing process with pid 88301 00:18:43.943 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.943 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.943 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88301' 00:18:43.943 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 88301 00:18:43.943 18:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 88301 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:45.324 18:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:45.324 00:18:45.324 real 0m51.637s 00:18:45.324 user 3m15.017s 00:18:45.324 sys 0m11.576s 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:45.324 ************************************ 00:18:45.324 END TEST nvmf_perf 00:18:45.324 ************************************ 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.324 ************************************ 00:18:45.324 START TEST nvmf_fio_host 00:18:45.324 ************************************ 00:18:45.324 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:45.585 * Looking for test storage... 00:18:45.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.585 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.586 --rc genhtml_branch_coverage=1 00:18:45.586 --rc genhtml_function_coverage=1 00:18:45.586 --rc genhtml_legend=1 00:18:45.586 --rc geninfo_all_blocks=1 00:18:45.586 --rc geninfo_unexecuted_blocks=1 00:18:45.586 00:18:45.586 ' 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.586 --rc genhtml_branch_coverage=1 00:18:45.586 --rc genhtml_function_coverage=1 00:18:45.586 --rc genhtml_legend=1 00:18:45.586 --rc geninfo_all_blocks=1 00:18:45.586 --rc geninfo_unexecuted_blocks=1 00:18:45.586 00:18:45.586 ' 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.586 --rc genhtml_branch_coverage=1 00:18:45.586 --rc genhtml_function_coverage=1 00:18:45.586 --rc genhtml_legend=1 00:18:45.586 --rc geninfo_all_blocks=1 00:18:45.586 --rc geninfo_unexecuted_blocks=1 00:18:45.586 00:18:45.586 ' 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:45.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.586 --rc genhtml_branch_coverage=1 00:18:45.586 --rc genhtml_function_coverage=1 00:18:45.586 --rc genhtml_legend=1 00:18:45.586 --rc geninfo_all_blocks=1 00:18:45.586 --rc geninfo_unexecuted_blocks=1 00:18:45.586 00:18:45.586 ' 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.586 18:36:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.586 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.587 18:36:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:45.587 Cannot find device "nvmf_init_br" 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:45.587 Cannot find device "nvmf_init_br2" 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:45.587 Cannot find device "nvmf_tgt_br" 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:45.587 Cannot find device "nvmf_tgt_br2" 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:45.587 Cannot find device "nvmf_init_br" 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:45.587 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:45.848 Cannot find device "nvmf_init_br2" 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:45.848 Cannot find device "nvmf_tgt_br" 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:45.848 Cannot find device "nvmf_tgt_br2" 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:45.848 Cannot find device "nvmf_br" 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:45.848 Cannot find device "nvmf_init_if" 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:45.848 Cannot find device "nvmf_init_if2" 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:45.848 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.849 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:46.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:46.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:18:46.109 00:18:46.109 --- 10.0.0.3 ping statistics --- 00:18:46.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.109 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:46.109 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:46.109 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:18:46.109 00:18:46.109 --- 10.0.0.4 ping statistics --- 00:18:46.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.109 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:46.109 00:18:46.109 --- 10.0.0.1 ping statistics --- 00:18:46.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.109 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:46.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:46.109 00:18:46.109 --- 10.0.0.2 ping statistics --- 00:18:46.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.109 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:46.109 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89177 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89177 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 89177 ']' 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.110 18:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.110 [2024-12-08 18:36:03.885150] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:46.110 [2024-12-08 18:36:03.885246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.110 [2024-12-08 18:36:04.028354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.369 [2024-12-08 18:36:04.125880] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.369 [2024-12-08 18:36:04.125965] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.369 [2024-12-08 18:36:04.125980] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.369 [2024-12-08 18:36:04.125990] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.369 [2024-12-08 18:36:04.126000] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.369 [2024-12-08 18:36:04.127161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.369 [2024-12-08 18:36:04.127354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.369 [2024-12-08 18:36:04.127485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.369 [2024-12-08 18:36:04.127498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.369 [2024-12-08 18:36:04.203691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:47.306 18:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.306 18:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:47.306 18:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:47.306 [2024-12-08 18:36:05.120202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.306 18:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:47.306 18:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.306 18:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.306 18:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:47.565 Malloc1 00:18:47.565 18:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:47.824 18:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.082 18:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:48.341 [2024-12-08 18:36:06.153945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:48.341 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:48.600 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:48.600 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:48.601 18:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:48.860 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:48.860 fio-3.35 00:18:48.860 Starting 1 thread 00:18:51.397 00:18:51.397 test: (groupid=0, jobs=1): err= 0: pid=89260: Sun Dec 8 18:36:08 2024 00:18:51.397 read: IOPS=9445, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2006msec) 00:18:51.397 slat (nsec): min=1654, max=323223, avg=2258.65, stdev=3207.11 00:18:51.397 clat (usec): min=2606, max=13710, avg=7055.77, stdev=689.82 00:18:51.397 lat (usec): min=2645, max=13713, avg=7058.02, stdev=689.75 00:18:51.397 clat percentiles (usec): 00:18:51.397 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:18:51.397 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7111], 00:18:51.397 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7701], 95.00th=[ 7898], 00:18:51.397 | 99.00th=[ 8848], 99.50th=[11863], 99.90th=[13173], 99.95th=[13566], 00:18:51.397 | 99.99th=[13698] 00:18:51.397 bw ( KiB/s): min=35968, max=38896, per=99.97%, avg=37770.00, stdev=1320.06, samples=4 00:18:51.397 iops : min= 8992, max= 9724, avg=9442.00, stdev=330.09, samples=4 00:18:51.397 write: IOPS=9446, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2006msec); 0 zone resets 00:18:51.397 slat (nsec): min=1742, max=271890, avg=2370.18, stdev=2521.07 00:18:51.397 clat (usec): min=2462, max=13113, avg=6431.52, stdev=652.01 00:18:51.397 lat (usec): min=2477, max=13115, avg=6433.89, stdev=652.01 00:18:51.397 
clat percentiles (usec): 00:18:51.397 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:18:51.397 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:18:51.397 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7242], 00:18:51.397 | 99.00th=[ 8094], 99.50th=[11076], 99.90th=[12256], 99.95th=[12649], 00:18:51.397 | 99.99th=[13173] 00:18:51.397 bw ( KiB/s): min=36792, max=38984, per=99.96%, avg=37772.00, stdev=1139.37, samples=4 00:18:51.397 iops : min= 9198, max= 9746, avg=9443.00, stdev=284.84, samples=4 00:18:51.397 lat (msec) : 4=0.11%, 10=99.13%, 20=0.77% 00:18:51.397 cpu : usr=71.57%, sys=21.50%, ctx=17, majf=0, minf=6 00:18:51.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:51.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.397 issued rwts: total=18948,18950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.397 00:18:51.397 Run status group 0 (all jobs): 00:18:51.397 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2006-2006msec 00:18:51.397 WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2006-2006msec 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.397 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:51.398 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:51.398 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:51.398 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:51.398 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:51.398 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:51.398 18:36:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:51.398 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:51.398 fio-3.35 00:18:51.398 Starting 1 thread 00:18:53.935 00:18:53.935 test: (groupid=0, jobs=1): err= 0: pid=89303: Sun Dec 8 18:36:11 2024 00:18:53.935 read: IOPS=8771, BW=137MiB/s (144MB/s)(275MiB/2006msec) 00:18:53.935 slat (usec): min=2, max=121, avg= 3.26, stdev= 2.04 00:18:53.935 clat (usec): min=1629, max=18353, avg=8279.84, stdev=2252.67 00:18:53.935 lat (usec): min=1632, max=18356, avg=8283.10, stdev=2252.74 00:18:53.935 clat percentiles (usec): 00:18:53.935 | 1.00th=[ 3884], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 6390], 00:18:53.935 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8848], 00:18:53.935 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11207], 95.00th=[12256], 00:18:53.935 | 99.00th=[14091], 99.50th=[14877], 99.90th=[15533], 99.95th=[15795], 00:18:53.935 | 99.99th=[16909] 00:18:53.935 bw ( KiB/s): min=67072, max=74784, per=50.35%, avg=70656.00, stdev=3438.28, samples=4 00:18:53.935 iops : min= 4192, max= 4674, avg=4416.00, stdev=214.89, samples=4 00:18:53.935 write: IOPS=5035, BW=78.7MiB/s (82.5MB/s)(144MiB/1824msec); 0 zone resets 00:18:53.935 slat (usec): min=29, max=364, avg=34.23, stdev= 9.59 00:18:53.935 clat (usec): min=4253, max=21417, avg=11136.94, stdev=2380.15 00:18:53.935 lat (usec): min=4307, max=21449, avg=11171.17, stdev=2382.03 00:18:53.935 clat percentiles (usec): 00:18:53.935 | 1.00th=[ 6587], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[ 8979], 00:18:53.935 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469], 00:18:53.935 | 70.00th=[12387], 80.00th=[13435], 90.00th=[14353], 95.00th=[15401], 00:18:53.935 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19006], 99.95th=[19006], 00:18:53.935 | 99.99th=[21365] 00:18:53.935 bw ( KiB/s): min=69952, max=77888, per=91.20%, avg=73472.00, stdev=3636.19, samples=4 00:18:53.935 iops : min= 4372, max= 4868, avg=4592.00, stdev=227.26, samples=4 00:18:53.935 lat (msec) : 2=0.01%, 4=0.89%, 10=63.77%, 20=35.32%, 50=0.01% 00:18:53.935 cpu : usr=81.11%, sys=15.15%, ctx=5, majf=0, minf=2 00:18:53.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:53.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.935 issued rwts: total=17595,9184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.935 00:18:53.935 Run status group 0 (all jobs): 
00:18:53.935 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=275MiB (288MB), run=2006-2006msec 00:18:53.935 WRITE: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=144MiB (150MB), run=1824-1824msec 00:18:53.935 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.935 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:53.935 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:53.935 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:53.935 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:18:53.935 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:18:53.936 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:53.936 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:53.936 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:18:53.936 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:18:53.936 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:53.936 18:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:54.196 Nvme0n1 00:18:54.196 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a5a041ed-e091-4948-a1ea-7037aafe765d 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a5a041ed-e091-4948-a1ea-7037aafe765d 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a5a041ed-e091-4948-a1ea-7037aafe765d 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:54.765 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:54.765 { 00:18:54.765 "uuid": "a5a041ed-e091-4948-a1ea-7037aafe765d", 00:18:54.766 "name": "lvs_0", 00:18:54.766 "base_bdev": "Nvme0n1", 00:18:54.766 "total_data_clusters": 4, 00:18:54.766 "free_clusters": 4, 00:18:54.766 "block_size": 4096, 00:18:54.766 "cluster_size": 1073741824 00:18:54.766 } 00:18:54.766 ]' 00:18:54.766 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a5a041ed-e091-4948-a1ea-7037aafe765d") .free_clusters' 00:18:54.766 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 
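The fio jobs above and below are not issued through the kernel NVMe initiator: a stock fio binary is run with the SPDK NVMe fio plugin preloaded, and the remote namespace is addressed with a transport-style filename string instead of a device path. A minimal standalone sketch of the same invocation, assuming the plugin and fio paths used in this run (the job file name example.fio is illustrative; the runs above use app/fio/nvme/example_config.fio):

    # example.fio -- minimal job for the SPDK NVMe fio plugin
    # ioengine=spdk is served by the preloaded plugin; thread=1 is required by it
    [global]
    ioengine=spdk
    thread=1
    rw=randrw
    iodepth=128
    time_based=1
    runtime=10

    [test]

    # preload the plugin into an ordinary fio binary and address the NVMe/TCP namespace directly
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio example.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096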
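The free_clusters value just extracted drives the size of the logical volume carved next: free clusters times cluster size, converted to MiB, is the figure handed to bdev_lvol_create in the lines that follow. For lvs_0 the conversion reduces to:

    # sketch of the free-space arithmetic behind the helper above
    free_clusters=4
    cluster_size=1073741824                                   # 1 GiB clusters, from bdev_lvol_get_lvstores
    echo $(( free_clusters * cluster_size / 1024 / 1024 ))    # 4096, the MiB size passed to bdev_lvol_create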
00:18:54.766 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a5a041ed-e091-4948-a1ea-7037aafe765d") .cluster_size' 00:18:55.025 4096 00:18:55.025 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:18:55.025 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:55.025 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:55.025 18:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:55.284 6a19c325-608f-4e5d-96d8-88b20aedc5fc 00:18:55.284 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:55.544 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:55.544 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.804 
18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:55.804 18:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:56.064 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:56.064 fio-3.35 00:18:56.064 Starting 1 thread 00:18:58.598 00:18:58.598 test: (groupid=0, jobs=1): err= 0: pid=89413: Sun Dec 8 18:36:16 2024 00:18:58.598 read: IOPS=6039, BW=23.6MiB/s (24.7MB/s)(47.4MiB/2009msec) 00:18:58.598 slat (nsec): min=1764, max=318206, avg=2781.73, stdev=4366.33 00:18:58.598 clat (usec): min=3148, max=19322, avg=11076.77, stdev=946.72 00:18:58.598 lat (usec): min=3157, max=19325, avg=11079.55, stdev=946.41 00:18:58.598 clat percentiles (usec): 00:18:58.598 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:18:58.598 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:18:58.598 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:18:58.598 | 99.00th=[13304], 99.50th=[13829], 99.90th=[16581], 99.95th=[17695], 00:18:58.598 | 99.99th=[19268] 00:18:58.598 bw ( KiB/s): min=23096, max=24968, per=99.85%, avg=24124.00, stdev=786.08, samples=4 00:18:58.598 iops : min= 5774, max= 6242, avg=6031.00, stdev=196.52, samples=4 00:18:58.598 write: IOPS=6020, BW=23.5MiB/s (24.7MB/s)(47.2MiB/2009msec); 0 zone resets 00:18:58.598 slat (nsec): min=1894, max=255110, avg=2953.53, stdev=3304.42 00:18:58.598 clat (usec): min=2521, max=18034, avg=10052.17, stdev=889.87 00:18:58.598 lat (usec): min=2535, max=18037, avg=10055.12, stdev=889.75 00:18:58.598 clat percentiles (usec): 00:18:58.598 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:18:58.598 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:18:58.598 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:18:58.598 | 99.00th=[11994], 99.50th=[12518], 99.90th=[16319], 99.95th=[17695], 00:18:58.598 | 99.99th=[17957] 00:18:58.598 bw ( KiB/s): min=23776, max=24416, per=99.99%, avg=24082.00, stdev=266.32, samples=4 00:18:58.598 iops : min= 5944, max= 6104, avg=6020.50, stdev=66.58, samples=4 00:18:58.598 lat (msec) : 4=0.05%, 10=28.88%, 20=71.06% 00:18:58.598 cpu : usr=73.80%, sys=20.47%, ctx=4, majf=0, minf=6 00:18:58.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:58.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.598 issued rwts: total=12134,12096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.598 00:18:58.598 Run status group 0 (all jobs): 00:18:58.598 READ: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.4MiB (49.7MB), 
run=2009-2009msec 00:18:58.598 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.2MiB (49.5MB), run=2009-2009msec 00:18:58.598 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:58.598 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:58.857 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=f3b72a8c-de85-4faa-9eea-dfe6fcf68639 00:18:58.857 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb f3b72a8c-de85-4faa-9eea-dfe6fcf68639 00:18:58.857 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f3b72a8c-de85-4faa-9eea-dfe6fcf68639 00:18:58.857 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:58.857 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:58.857 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:58.857 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:59.116 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:59.116 { 00:18:59.116 "uuid": "a5a041ed-e091-4948-a1ea-7037aafe765d", 00:18:59.116 "name": "lvs_0", 00:18:59.116 "base_bdev": "Nvme0n1", 00:18:59.116 "total_data_clusters": 4, 00:18:59.116 "free_clusters": 0, 00:18:59.116 "block_size": 4096, 00:18:59.116 "cluster_size": 1073741824 00:18:59.116 }, 00:18:59.116 { 00:18:59.116 "uuid": "f3b72a8c-de85-4faa-9eea-dfe6fcf68639", 00:18:59.116 "name": "lvs_n_0", 00:18:59.116 "base_bdev": "6a19c325-608f-4e5d-96d8-88b20aedc5fc", 00:18:59.116 "total_data_clusters": 1022, 00:18:59.116 "free_clusters": 1022, 00:18:59.116 "block_size": 4096, 00:18:59.116 "cluster_size": 4194304 00:18:59.116 } 00:18:59.116 ]' 00:18:59.116 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f3b72a8c-de85-4faa-9eea-dfe6fcf68639") .free_clusters' 00:18:59.116 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:18:59.116 18:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f3b72a8c-de85-4faa-9eea-dfe6fcf68639") .cluster_size' 00:18:59.116 4088 00:18:59.116 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:59.116 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:18:59.116 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:18:59.116 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:59.374 47f78438-3095-413b-96e5-1e0aa4ec8596 00:18:59.374 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:59.715 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:59.972 18:36:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:00.231 18:36:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:00.231 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:00.231 fio-3.35 00:19:00.231 Starting 1 thread 00:19:02.762 00:19:02.762 test: (groupid=0, jobs=1): err= 0: pid=89490: Sun Dec 8 18:36:20 2024 00:19:02.762 read: 
IOPS=6257, BW=24.4MiB/s (25.6MB/s)(49.1MiB/2009msec) 00:19:02.762 slat (nsec): min=1911, max=297493, avg=3021.09, stdev=4513.18 00:19:02.762 clat (usec): min=3166, max=20057, avg=10669.59, stdev=970.24 00:19:02.762 lat (usec): min=3175, max=20060, avg=10672.62, stdev=969.96 00:19:02.762 clat percentiles (usec): 00:19:02.762 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:19:02.762 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:19:02.762 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:19:02.762 | 99.00th=[12911], 99.50th=[13173], 99.90th=[18220], 99.95th=[19268], 00:19:02.762 | 99.99th=[19268] 00:19:02.762 bw ( KiB/s): min=24192, max=25542, per=99.67%, avg=24947.50, stdev=688.31, samples=4 00:19:02.762 iops : min= 6048, max= 6385, avg=6236.75, stdev=171.93, samples=4 00:19:02.762 write: IOPS=6240, BW=24.4MiB/s (25.6MB/s)(49.0MiB/2009msec); 0 zone resets 00:19:02.762 slat (nsec): min=1932, max=302004, avg=3167.47, stdev=4019.63 00:19:02.762 clat (usec): min=2437, max=18047, avg=9686.54, stdev=888.04 00:19:02.762 lat (usec): min=2450, max=18049, avg=9689.71, stdev=887.88 00:19:02.762 clat percentiles (usec): 00:19:02.762 | 1.00th=[ 7898], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:19:02.762 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:19:02.762 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:19:02.762 | 99.00th=[11731], 99.50th=[12125], 99.90th=[16188], 99.95th=[17695], 00:19:02.762 | 99.99th=[17957] 00:19:02.762 bw ( KiB/s): min=24592, max=25224, per=99.91%, avg=24940.50, stdev=272.97, samples=4 00:19:02.762 iops : min= 6148, max= 6306, avg=6235.00, stdev=68.28, samples=4 00:19:02.762 lat (msec) : 4=0.06%, 10=44.45%, 20=55.49%, 50=0.01% 00:19:02.762 cpu : usr=73.01%, sys=20.77%, ctx=4, majf=0, minf=6 00:19:02.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:02.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.762 issued rwts: total=12571,12538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.762 00:19:02.762 Run status group 0 (all jobs): 00:19:02.762 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.1MiB (51.5MB), run=2009-2009msec 00:19:02.762 WRITE: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.0MiB (51.4MB), run=2009-2009msec 00:19:02.762 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:02.762 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:03.021 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:03.281 18:36:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:03.541 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:03.541 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:03.800 18:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:04.738 rmmod nvme_tcp 00:19:04.738 rmmod nvme_fabrics 00:19:04.738 rmmod nvme_keyring 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 89177 ']' 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 89177 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 89177 ']' 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 89177 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89177 00:19:04.738 killing process with pid 89177 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89177' 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 89177 00:19:04.738 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 89177 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:04.997 
18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:04.997 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:04.998 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:04.998 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:04.998 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:04.998 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:05.257 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:05.257 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:05.257 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:05.257 18:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:05.257 00:19:05.257 real 0m19.805s 00:19:05.257 user 1m25.864s 00:19:05.257 sys 0m4.351s 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 ************************************ 00:19:05.257 END TEST nvmf_fio_host 00:19:05.257 ************************************ 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 ************************************ 00:19:05.257 START TEST nvmf_failover 00:19:05.257 ************************************ 00:19:05.257 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:05.518 * Looking for test storage... 
00:19:05.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.518 --rc genhtml_branch_coverage=1 00:19:05.518 --rc genhtml_function_coverage=1 00:19:05.518 --rc genhtml_legend=1 00:19:05.518 --rc geninfo_all_blocks=1 00:19:05.518 --rc geninfo_unexecuted_blocks=1 00:19:05.518 00:19:05.518 ' 00:19:05.518 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.519 --rc genhtml_branch_coverage=1 00:19:05.519 --rc genhtml_function_coverage=1 00:19:05.519 --rc genhtml_legend=1 00:19:05.519 --rc geninfo_all_blocks=1 00:19:05.519 --rc geninfo_unexecuted_blocks=1 00:19:05.519 00:19:05.519 ' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.519 --rc genhtml_branch_coverage=1 00:19:05.519 --rc genhtml_function_coverage=1 00:19:05.519 --rc genhtml_legend=1 00:19:05.519 --rc geninfo_all_blocks=1 00:19:05.519 --rc geninfo_unexecuted_blocks=1 00:19:05.519 00:19:05.519 ' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.519 --rc genhtml_branch_coverage=1 00:19:05.519 --rc genhtml_function_coverage=1 00:19:05.519 --rc genhtml_legend=1 00:19:05.519 --rc geninfo_all_blocks=1 00:19:05.519 --rc geninfo_unexecuted_blocks=1 00:19:05.519 00:19:05.519 ' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.519 
18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.519 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
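nvmf/common.sh above exported the kernel-initiator plumbing for this test: NVME_CONNECT='nvme connect', a freshly generated NVME_HOSTNQN/NVME_HOSTID, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn and ports 4420-4422. Purely as an illustration (the connect call itself is not part of this excerpt), those pieces are typically combined into an nvme-cli session against the first target address once a listener exists:

    # illustrative only, not taken from this log; uses the variables exported above
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # matching teardown
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn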
00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:05.519 Cannot find device "nvmf_init_br" 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:05.519 Cannot find device "nvmf_init_br2" 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:05.519 Cannot find device "nvmf_tgt_br" 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:05.519 Cannot find device "nvmf_tgt_br2" 00:19:05.519 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:05.520 Cannot find device "nvmf_init_br" 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:05.520 Cannot find device "nvmf_init_br2" 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:05.520 Cannot find device "nvmf_tgt_br" 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:05.520 Cannot find device "nvmf_tgt_br2" 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:05.520 Cannot find device "nvmf_br" 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:05.520 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:05.778 Cannot find device "nvmf_init_if" 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:05.778 Cannot find device "nvmf_init_if2" 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:05.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:05.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:05.778 
18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:05.778 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:05.779 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:05.779 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:19:05.779 00:19:05.779 --- 10.0.0.3 ping statistics --- 00:19:05.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.779 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:05.779 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:05.779 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:19:05.779 00:19:05.779 --- 10.0.0.4 ping statistics --- 00:19:05.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.779 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:05.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:19:05.779 00:19:05.779 --- 10.0.0.1 ping statistics --- 00:19:05.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.779 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:05.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:19:05.779 00:19:05.779 --- 10.0.0.2 ping statistics --- 00:19:05.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.779 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:05.779 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=89784 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 89784 00:19:06.038 18:36:23 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89784 ']' 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.038 18:36:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:06.038 [2024-12-08 18:36:23.794849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:06.038 [2024-12-08 18:36:23.794932] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.038 [2024-12-08 18:36:23.937710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.297 [2024-12-08 18:36:24.027807] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.297 [2024-12-08 18:36:24.027888] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.297 [2024-12-08 18:36:24.027903] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.297 [2024-12-08 18:36:24.027915] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.297 [2024-12-08 18:36:24.027926] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
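The interface fixture traced out of nvmf/common.sh above can be reproduced by hand; a minimal sketch, assuming root plus iproute2/iptables, showing only one of the two veth pairs and using the names and addresses from the log:
    ip netns add nvmf_tgt_ns_spdk                                  # target side lives in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                 # bridge joins the host ends of both pairs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener port
    ping -c 1 10.0.0.3                                              # host -> namespaced target over the bridge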
00:19:06.297 [2024-12-08 18:36:24.028083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.297 [2024-12-08 18:36:24.028234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.297 [2024-12-08 18:36:24.028244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.297 [2024-12-08 18:36:24.108840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:06.297 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.297 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:06.297 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:06.297 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:06.297 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:06.297 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.297 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:06.865 [2024-12-08 18:36:24.514645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.865 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:06.865 Malloc0 00:19:06.865 18:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:07.124 18:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:07.389 18:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:07.648 [2024-12-08 18:36:25.507665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:07.648 18:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:07.907 [2024-12-08 18:36:25.719920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:07.907 18:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:08.166 [2024-12-08 18:36:26.012422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89834 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
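The target configuration that host/failover.sh drives above reduces to a handful of rpc.py calls; a condensed sketch with the paths, NQN and ports from the log, assuming the target is already up on /var/tmp/spdk.sock ($rpc is just shorthand here):
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB in-capsule data size
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                  # three listeners give the test its failover hops
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done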
00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89834 /var/tmp/bdevperf.sock 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89834 ']' 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.166 18:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.558 18:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.558 18:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:09.558 18:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:09.558 NVMe0n1 00:19:09.558 18:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:09.817 00:19:09.817 18:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89863 00:19:09.817 18:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.817 18:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:10.756 18:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:11.325 18:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:14.614 18:36:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:14.614 00:19:14.614 18:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:14.882 18:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:18.178 18:36:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:18.178 [2024-12-08 18:36:35.810210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:18.178 18:36:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:19.114 18:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:19.372 18:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89863 00:19:25.951 { 00:19:25.951 "results": [ 00:19:25.951 { 00:19:25.951 "job": "NVMe0n1", 00:19:25.951 "core_mask": "0x1", 00:19:25.951 "workload": "verify", 00:19:25.951 "status": "finished", 00:19:25.951 "verify_range": { 00:19:25.951 "start": 0, 00:19:25.951 "length": 16384 00:19:25.951 }, 00:19:25.951 "queue_depth": 128, 00:19:25.951 "io_size": 4096, 00:19:25.951 "runtime": 15.010696, 00:19:25.951 "iops": 9998.070709046402, 00:19:25.951 "mibps": 39.05496370721251, 00:19:25.951 "io_failed": 3597, 00:19:25.951 "io_timeout": 0, 00:19:25.951 "avg_latency_us": 12477.413694662586, 00:19:25.951 "min_latency_us": 562.2690909090909, 00:19:25.951 "max_latency_us": 16324.421818181818 00:19:25.951 } 00:19:25.951 ], 00:19:25.951 "core_count": 1 00:19:25.951 } 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89834 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89834 ']' 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89834 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89834 00:19:25.951 killing process with pid 89834 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89834' 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89834 00:19:25.951 18:36:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89834 00:19:25.951 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.951 [2024-12-08 18:36:26.071968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:25.951 [2024-12-08 18:36:26.072045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89834 ] 00:19:25.951 [2024-12-08 18:36:26.197991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.951 [2024-12-08 18:36:26.256747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.951 [2024-12-08 18:36:26.309917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.951 Running I/O for 15 seconds... 
00:19:25.951 9799.00 IOPS, 38.28 MiB/s [2024-12-08T18:36:43.881Z] [2024-12-08 18:36:28.938996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.951 [2024-12-08 18:36:28.939047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.951 [2024-12-08 18:36:28.939076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.951 [2024-12-08 18:36:28.939092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:25.952 [2024-12-08 18:36:28.939316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.939596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939661] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.939981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.939997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.940010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940025] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.952 [2024-12-08 18:36:28.940068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.952 [2024-12-08 18:36:28.940357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90264 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.952 [2024-12-08 18:36:28.940369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:25.953 [2024-12-08 18:36:28.940696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.940847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.940874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.940900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.940943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.940971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.940984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.940997] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.941030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.941056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.953 [2024-12-08 18:36:28.941100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941291] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.953 [2024-12-08 18:36:28.941566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.953 [2024-12-08 18:36:28.941580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.954 [2024-12-08 18:36:28.941809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.941836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.941863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.941890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:25.954 [2024-12-08 18:36:28.941903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.941916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.941943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.941970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.941985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.941999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942189] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.954 [2024-12-08 18:36:28.942441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.954 [2024-12-08 18:36:28.942454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942471] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d43c0 is same with the state(6) to be set 00:19:25.955 [2024-12-08 18:36:28.942494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90704 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91032 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91040 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91048 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91056 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91064 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91072 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91080 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91088 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.942961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91096 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.942973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.942986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.942995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.943004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91104 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.943016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.943037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.943048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91112 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.943060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:25.955 [2024-12-08 18:36:28.943072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.943081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.943090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91120 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.943103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.943124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.943134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91128 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.943146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.943167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.943176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91136 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.943189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.943211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.943227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91144 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.943252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.955 [2024-12-08 18:36:28.943273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.955 [2024-12-08 18:36:28.943283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91152 len:8 PRP1 0x0 PRP2 0x0 00:19:25.955 [2024-12-08 18:36:28.943299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943353] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8d43c0 was disconnected and freed. reset controller. 
00:19:25.955 [2024-12-08 18:36:28.943371] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:25.955 [2024-12-08 18:36:28.943462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.955 [2024-12-08 18:36:28.943485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.955 [2024-12-08 18:36:28.943514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.955 [2024-12-08 18:36:28.943539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.955 [2024-12-08 18:36:28.943572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.955 [2024-12-08 18:36:28.943585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:25.955 [2024-12-08 18:36:28.943634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3f10 (9): Bad file descriptor 00:19:25.955 [2024-12-08 18:36:28.946882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.955 [2024-12-08 18:36:28.978392] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
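The 10.0.0.3:4420 -> 10.0.0.3:4421 failover recorded above can only succeed because an alternate transport ID for nqn.2016-06.io.spdk:cnode1 was registered before the active qpair dropped. A minimal sketch of how such paths are typically pre-registered with SPDK's rpc.py follows; the bdev name, listener ports and exact flags here are assumptions for illustration and are not captured in this log:

  # target side: expose the subsystem on the additional TCP listeners used for failover (ports assumed)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # initiator side: attach the primary path, then register a secondary path under the same
  # controller name so bdev_nvme_failover_trid can switch to it when the active qpair disconnects
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover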
00:19:25.955 9885.00 IOPS, 38.61 MiB/s [2024-12-08T18:36:43.885Z] 10041.67 IOPS, 39.23 MiB/s [2024-12-08T18:36:43.885Z] 10100.50 IOPS, 39.46 MiB/s [2024-12-08T18:36:43.885Z] [2024-12-08 18:36:32.556018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:25.956 [2024-12-08 18:36:32.556914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.956 [2024-12-08 18:36:32.556950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.556974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.556989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.956 [2024-12-08 18:36:32.557001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.956 [2024-12-08 18:36:32.557014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557175] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.557594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.557952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 
18:36:32.557976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.557989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.957 [2024-12-08 18:36:32.558001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.558014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.558026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.558040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.957 [2024-12-08 18:36:32.558052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.957 [2024-12-08 18:36:32.558065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:25.958 [2024-12-08 18:36:32.558790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.958 [2024-12-08 18:36:32.558851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.558984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.558997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.559008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.559021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.958 [2024-12-08 18:36:32.559033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.559045] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b7310 is same with the state(6) to be set 00:19:25.958 [2024-12-08 18:36:32.559060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.958 [2024-12-08 18:36:32.559069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.958 [2024-12-08 18:36:32.559078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1720 len:8 PRP1 0x0 PRP2 0x0 00:19:25.958 [2024-12-08 18:36:32.559093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.958 [2024-12-08 18:36:32.559106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.958 [2024-12-08 18:36:32.559114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.958 [2024-12-08 18:36:32.559123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:8 PRP1 0x0 PRP2 0x0 00:19:25.959 [2024-12-08 18:36:32.559133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.959 [2024-12-08 18:36:32.559149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.959 [2024-12-08 18:36:32.559157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.959 [2024-12-08 18:36:32.559166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2120 len:8 PRP1 0x0 PRP2 0x0 00:19:25.959 [2024-12-08 18:36:32.559177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.959 [2024-12-08 18:36:32.559188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.959 [2024-12-08 18:36:32.559198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.959 [2024-12-08 18:36:32.559213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2128 len:8 PRP1 0x0 PRP2 0x0 00:19:25.959 [2024-12-08 18:36:32.559238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.959 [2024-12-08 18:36:32.559251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.959 [2024-12-08 18:36:32.559259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.959 [2024-12-08 18:36:32.559268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2136 len:8 PRP1 0x0 PRP2 0x0 00:19:25.959 [2024-12-08 18:36:32.559279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.959 [2024-12-08 18:36:32.559290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.959 [2024-12-08 18:36:32.559299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.959 [2024-12-08 18:36:32.559307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:8 PRP1 0x0 PRP2 0x0 00:19:25.959 [2024-12-08 18:36:32.559318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:25.959 [2024-12-08 18:36:32.559330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:25.959 [2024-12-08 18:36:32.559339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:25.959 [2024-12-08 18:36:32.559347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2152 len:8 PRP1 0x0 PRP2 0x0
00:19:25.959 [2024-12-08 18:36:32.559358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort/manual-completion sequence repeats for the remaining queued WRITEs, lba:2160 through lba:2232 ...]
00:19:25.959 [2024-12-08 18:36:32.559896] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8b7310 was disconnected and freed. reset controller.
00:19:25.959 [2024-12-08 18:36:32.559914] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:19:25.959 [2024-12-08 18:36:32.559966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:25.959 [2024-12-08 18:36:32.559986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ABORTED - SQ DELETION completions follow for the ASYNC EVENT REQUESTs on cid:2, cid:1 and cid:0 ...]
00:19:25.959 [2024-12-08 18:36:32.560086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:25.959 [2024-12-08 18:36:32.560140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3f10 (9): Bad file descriptor
00:19:25.959 [2024-12-08 18:36:32.563269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.959 [2024-12-08 18:36:32.596371] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:25.959 10049.40 IOPS, 39.26 MiB/s [2024-12-08T18:36:43.889Z] 10078.50 IOPS, 39.37 MiB/s [2024-12-08T18:36:43.889Z] 10113.00 IOPS, 39.50 MiB/s [2024-12-08T18:36:43.889Z] 10150.88 IOPS, 39.65 MiB/s [2024-12-08T18:36:43.889Z] 10165.22 IOPS, 39.71 MiB/s
[2024-12-08T18:36:43.889Z] [2024-12-08 18:36:37.032785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:25.959 [2024-12-08 18:36:37.032836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ABORTED - SQ DELETION completion is printed for every other outstanding command on qid:1: WRITEs lba:115056 through lba:115616 and READs lba:114600 through lba:115040 ...]
00:19:25.962 [2024-12-08 18:36:37.036012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b7d30 is same with the state(6) to be set
00:19:25.963 [2024-12-08 18:36:37.036894] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8b7d30 was disconnected and freed. reset controller.
00:19:25.963 [2024-12-08 18:36:37.036912] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
[... the four outstanding ASYNC EVENT REQUESTs on the admin queue (qid:0, cid:0 through cid:3) are aborted with the same SQ DELETION status ...]
00:19:25.963 [2024-12-08 18:36:37.037082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:25.963 [2024-12-08 18:36:37.037140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3f10 (9): Bad file descriptor
00:19:25.963 [2024-12-08 18:36:37.040455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.963 [2024-12-08 18:36:37.072916] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:25.963 10098.70 IOPS, 39.45 MiB/s [2024-12-08T18:36:43.893Z] 10071.09 IOPS, 39.34 MiB/s [2024-12-08T18:36:43.893Z] 10046.25 IOPS, 39.24 MiB/s [2024-12-08T18:36:43.893Z] 10030.00 IOPS, 39.18 MiB/s [2024-12-08T18:36:43.893Z] 10015.14 IOPS, 39.12 MiB/s [2024-12-08T18:36:43.893Z] 9997.73 IOPS, 39.05 MiB/s
00:19:25.963 Latency(us)
00:19:25.963 [2024-12-08T18:36:43.893Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average      min       max
00:19:25.963 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:25.963 Verification LBA range: start 0x0 length 0x4000
00:19:25.963 NVMe0n1            :      15.01  9998.07    39.05   239.63    0.00   12477.41   562.27  16324.42
00:19:25.963 [2024-12-08T18:36:43.893Z] ===================================================================================================================
00:19:25.963 [2024-12-08T18:36:43.893Z] Total              :             9998.07    39.05   239.63    0.00   12477.41   562.27  16324.42
00:19:25.963 Received shutdown signal, test time was about 15.000000 seconds
00:19:25.963
00:19:25.963 Latency(us)
00:19:25.963 [2024-12-08T18:36:43.893Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average      min       max
00:19:25.963 [2024-12-08T18:36:43.893Z] ===================================================================================================================
00:19:25.963 [2024-12-08T18:36:43.893Z] Total              :                0.00     0.00     0.00    0.00       0.00     0.00      0.00
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90036
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90036 /var/tmp/bdevperf.sock
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90036 ']'
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:25.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
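The trace above is the pass/fail gate for the 15-second run just summarized: host/failover.sh counts the 'Resetting controller successful' notices emitted by bdevperf and requires exactly one per forced path failover, three in this run. A minimal sketch of that gate, assuming the run's output was captured to a file such as try.txt (the capture path is an assumption; only the grep and the comparison appear in the trace):

# Hedged sketch of the reset-count check at host/failover.sh@65-67, not the script itself.
output=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # assumed capture file
expected=3                                                   # one successful reset per detached path
count=$(grep -c 'Resetting controller successful' "$output")
if (( count != expected )); then
    echo "expected $expected successful controller resets, saw $count" >&2
    exit 1
fi

Because (( count != 3 )) evaluates false here, the script falls through and launches the second bdevperf instance in RPC-wait mode (-z, -t 1) for the single-failover run traced below.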
00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.963 18:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:26.223 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.223 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:26.223 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:26.483 [2024-12-08 18:36:44.398848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:26.742 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:26.742 [2024-12-08 18:36:44.627105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:26.742 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:27.311 NVMe0n1 00:19:27.311 18:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:27.570 00:19:27.570 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:27.828 00:19:27.828 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:27.828 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:28.087 18:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:28.346 18:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:31.636 18:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:31.636 18:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:31.636 18:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90114 00:19:31.636 18:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.636 18:36:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90114 00:19:33.015 { 00:19:33.015 "results": [ 00:19:33.015 { 00:19:33.015 "job": "NVMe0n1", 00:19:33.015 "core_mask": "0x1", 00:19:33.015 "workload": "verify", 00:19:33.015 "status": "finished", 00:19:33.015 "verify_range": { 00:19:33.015 "start": 0, 00:19:33.015 "length": 16384 00:19:33.015 }, 00:19:33.015 "queue_depth": 128, 00:19:33.015 "io_size": 4096, 
00:19:33.015 "runtime": 1.013181, 00:19:33.015 "iops": 7550.477160546832, 00:19:33.015 "mibps": 29.494051408386063, 00:19:33.015 "io_failed": 0, 00:19:33.015 "io_timeout": 0, 00:19:33.015 "avg_latency_us": 16889.400645513964, 00:19:33.015 "min_latency_us": 1869.2654545454545, 00:19:33.015 "max_latency_us": 16681.890909090907 00:19:33.015 } 00:19:33.015 ], 00:19:33.015 "core_count": 1 00:19:33.015 } 00:19:33.015 18:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.015 [2024-12-08 18:36:43.149897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:33.015 [2024-12-08 18:36:43.150058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90036 ] 00:19:33.015 [2024-12-08 18:36:43.281512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.015 [2024-12-08 18:36:43.349469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.015 [2024-12-08 18:36:43.402957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.015 [2024-12-08 18:36:46.205942] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:33.015 [2024-12-08 18:36:46.206413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.015 [2024-12-08 18:36:46.206568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.015 [2024-12-08 18:36:46.206654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.015 [2024-12-08 18:36:46.206732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.015 [2024-12-08 18:36:46.206818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.015 [2024-12-08 18:36:46.206889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.015 [2024-12-08 18:36:46.206954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.015 [2024-12-08 18:36:46.207030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.016 [2024-12-08 18:36:46.207097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.016 [2024-12-08 18:36:46.207219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.016 [2024-12-08 18:36:46.207321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a42f10 (9): Bad file descriptor 00:19:33.016 [2024-12-08 18:36:46.210514] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:33.016 Running I/O for 1 seconds... 
00:19:33.016 7521.00 IOPS, 29.38 MiB/s 00:19:33.016 Latency(us) 00:19:33.016 [2024-12-08T18:36:50.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.016 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:33.016 Verification LBA range: start 0x0 length 0x4000 00:19:33.016 NVMe0n1 : 1.01 7550.48 29.49 0.00 0.00 16889.40 1869.27 16681.89 00:19:33.016 [2024-12-08T18:36:50.946Z] =================================================================================================================== 00:19:33.016 [2024-12-08T18:36:50.946Z] Total : 7550.48 29.49 0.00 0.00 16889.40 1869.27 16681.89 00:19:33.016 18:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.016 18:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:33.016 18:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:33.585 18:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:33.585 18:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.845 18:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:33.845 18:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:37.135 18:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.135 18:36:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:37.135 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90036 00:19:37.135 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90036 ']' 00:19:37.135 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90036 00:19:37.135 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:37.135 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.135 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90036 00:19:37.393 killing process with pid 90036 00:19:37.393 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:37.393 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:37.393 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90036' 00:19:37.393 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90036 00:19:37.393 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90036 00:19:37.393 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:37.393 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:37.959 rmmod nvme_tcp 00:19:37.959 rmmod nvme_fabrics 00:19:37.959 rmmod nvme_keyring 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 89784 ']' 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 89784 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89784 ']' 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89784 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89784 00:19:37.959 killing process with pid 89784 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89784' 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89784 00:19:37.959 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89784 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:38.218 18:36:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:38.218 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:38.477 00:19:38.477 real 0m33.118s 00:19:38.477 user 2m7.855s 00:19:38.477 sys 0m5.494s 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:38.477 ************************************ 00:19:38.477 END TEST nvmf_failover 00:19:38.477 ************************************ 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.477 ************************************ 00:19:38.477 START TEST nvmf_host_discovery 00:19:38.477 ************************************ 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:38.477 * Looking for test storage... 
00:19:38.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:19:38.477 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.736 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:38.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.736 --rc genhtml_branch_coverage=1 00:19:38.736 --rc genhtml_function_coverage=1 00:19:38.736 --rc genhtml_legend=1 00:19:38.737 --rc geninfo_all_blocks=1 00:19:38.737 --rc geninfo_unexecuted_blocks=1 00:19:38.737 00:19:38.737 ' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.737 --rc genhtml_branch_coverage=1 00:19:38.737 --rc genhtml_function_coverage=1 00:19:38.737 --rc genhtml_legend=1 00:19:38.737 --rc geninfo_all_blocks=1 00:19:38.737 --rc geninfo_unexecuted_blocks=1 00:19:38.737 00:19:38.737 ' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.737 --rc genhtml_branch_coverage=1 00:19:38.737 --rc genhtml_function_coverage=1 00:19:38.737 --rc genhtml_legend=1 00:19:38.737 --rc geninfo_all_blocks=1 00:19:38.737 --rc geninfo_unexecuted_blocks=1 00:19:38.737 00:19:38.737 ' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.737 --rc genhtml_branch_coverage=1 00:19:38.737 --rc genhtml_function_coverage=1 00:19:38.737 --rc genhtml_legend=1 00:19:38.737 --rc geninfo_all_blocks=1 00:19:38.737 --rc geninfo_unexecuted_blocks=1 00:19:38.737 00:19:38.737 ' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:38.737 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:38.737 Cannot find device "nvmf_init_br" 00:19:38.737 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:38.738 Cannot find device "nvmf_init_br2" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:38.738 Cannot find device "nvmf_tgt_br" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.738 Cannot find device "nvmf_tgt_br2" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:38.738 Cannot find device "nvmf_init_br" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:38.738 Cannot find device "nvmf_init_br2" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:38.738 Cannot find device "nvmf_tgt_br" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:38.738 Cannot find device "nvmf_tgt_br2" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:38.738 Cannot find device "nvmf_br" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:38.738 Cannot find device "nvmf_init_if" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:38.738 Cannot find device "nvmf_init_if2" 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:38.738 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:38.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:38.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:19:38.997 00:19:38.997 --- 10.0.0.3 ping statistics --- 00:19:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.997 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:38.997 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:38.997 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:19:38.997 00:19:38.997 --- 10.0.0.4 ping statistics --- 00:19:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.997 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:38.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:38.997 00:19:38.997 --- 10.0.0.1 ping statistics --- 00:19:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.997 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:38.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:38.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:38.997 00:19:38.997 --- 10.0.0.2 ping statistics --- 00:19:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.997 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=90438 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 90438 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90438 ']' 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.997 18:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.256 [2024-12-08 18:36:56.960610] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:39.256 [2024-12-08 18:36:56.961213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.256 [2024-12-08 18:36:57.099455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.256 [2024-12-08 18:36:57.181435] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.256 [2024-12-08 18:36:57.181503] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.256 [2024-12-08 18:36:57.181518] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.256 [2024-12-08 18:36:57.181529] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.256 [2024-12-08 18:36:57.181539] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.256 [2024-12-08 18:36:57.181577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.515 [2024-12-08 18:36:57.263608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.515 [2024-12-08 18:36:57.387579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.515 [2024-12-08 18:36:57.395739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.515 18:36:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.515 null0 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.515 null1 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90468 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90468 /tmp/host.sock 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90468 ']' 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.515 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.515 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.774 [2024-12-08 18:36:57.483800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:39.774 [2024-12-08 18:36:57.483895] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90468 ] 00:19:39.774 [2024-12-08 18:36:57.624019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.774 [2024-12-08 18:36:57.688738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.032 [2024-12-08 18:36:57.745766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:40.032 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.032 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:40.032 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.033 18:36:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.033 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.292 18:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.292 18:36:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.292 [2024-12-08 18:36:58.183799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.292 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:40.552 18:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:41.120 [2024-12-08 18:36:58.830379] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:41.120 [2024-12-08 18:36:58.830416] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:41.120 
[2024-12-08 18:36:58.830434] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:41.120 [2024-12-08 18:36:58.836427] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:41.120 [2024-12-08 18:36:58.893103] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:41.120 [2024-12-08 18:36:58.893134] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.690 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:41.950 18:36:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.950 [2024-12-08 18:36:59.764909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:41.950 [2024-12-08 18:36:59.765738] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:41.950 [2024-12-08 18:36:59.765776] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:41.950 [2024-12-08 18:36:59.771750] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:41.950 18:36:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:41.950 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.951 [2024-12-08 18:36:59.834378] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:41.951 [2024-12-08 18:36:59.834402] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:41.951 [2024-12-08 18:36:59.834408] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:41.951 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.211 18:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.211 [2024-12-08 18:36:59.998092] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:42.211 [2024-12-08 18:36:59.998239] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:42.211 [2024-12-08 18:37:00.000994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.211 [2024-12-08 18:37:00.001029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.211 [2024-12-08 18:37:00.001041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.211 [2024-12-08 18:37:00.001048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.211 [2024-12-08 18:37:00.001056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.211 [2024-12-08 18:37:00.001063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.211 [2024-12-08 18:37:00.001071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.211 [2024-12-08 18:37:00.001078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.211 [2024-12-08 18:37:00.001085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab0480 is same with the state(6) to be set 00:19:42.211 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.211 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.211 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.211 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.211 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- 
)) 00:19:42.211 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:42.212 [2024-12-08 18:37:00.004184] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:42.212 [2024-12-08 18:37:00.004211] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:42.212 [2024-12-08 18:37:00.004257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab0480 (9): Bad file descriptor 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:42.212 18:37:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.212 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:42.472 18:37:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:42.472 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.731 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:42.732 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:42.732 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:42.732 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.732 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:42.732 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.732 18:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.670 [2024-12-08 18:37:01.420944] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:43.670 [2024-12-08 18:37:01.420965] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:43.670 [2024-12-08 18:37:01.420981] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:43.670 [2024-12-08 18:37:01.426972] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:43.670 [2024-12-08 18:37:01.487660] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:43.670 [2024-12-08 18:37:01.487868] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:43.670 request: 00:19:43.670 { 00:19:43.670 "name": "nvme", 00:19:43.670 "trtype": "tcp", 00:19:43.670 "traddr": "10.0.0.3", 00:19:43.670 "adrfam": "ipv4", 00:19:43.670 "trsvcid": "8009", 00:19:43.670 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:43.670 "wait_for_attach": true, 00:19:43.670 "method": "bdev_nvme_start_discovery", 00:19:43.670 "req_id": 1 00:19:43.670 } 00:19:43.670 Got JSON-RPC error response 00:19:43.670 response: 00:19:43.670 { 00:19:43.670 "code": -17, 00:19:43.670 "message": "File exists" 00:19:43.670 } 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.670 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.671 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.930 request: 00:19:43.930 { 00:19:43.930 "name": "nvme_second", 00:19:43.930 "trtype": "tcp", 00:19:43.930 "traddr": "10.0.0.3", 00:19:43.930 "adrfam": "ipv4", 00:19:43.930 "trsvcid": "8009", 00:19:43.930 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:43.930 "wait_for_attach": true, 00:19:43.930 "method": "bdev_nvme_start_discovery", 00:19:43.930 "req_id": 1 00:19:43.930 } 00:19:43.930 Got JSON-RPC error response 00:19:43.930 response: 00:19:43.930 { 00:19:43.930 "code": -17, 00:19:43.930 "message": "File exists" 00:19:43.930 } 00:19:43.930 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.931 18:37:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.931 18:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.870 [2024-12-08 18:37:02.752316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.870 [2024-12-08 18:37:02.752522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5db0 with addr=10.0.0.3, port=8010 00:19:44.870 [2024-12-08 18:37:02.752551] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:44.870 [2024-12-08 18:37:02.752561] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:44.870 [2024-12-08 18:37:02.752570] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:46.248 [2024-12-08 18:37:03.752305] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.248 [2024-12-08 18:37:03.752346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5db0 with addr=10.0.0.3, port=8010 00:19:46.248 [2024-12-08 18:37:03.752362] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:46.248 [2024-12-08 18:37:03.752369] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:46.248 [2024-12-08 18:37:03.752376] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:47.185 [2024-12-08 18:37:04.752237] 
bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:47.185 request: 00:19:47.185 { 00:19:47.185 "name": "nvme_second", 00:19:47.185 "trtype": "tcp", 00:19:47.185 "traddr": "10.0.0.3", 00:19:47.185 "adrfam": "ipv4", 00:19:47.185 "trsvcid": "8010", 00:19:47.185 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:47.185 "wait_for_attach": false, 00:19:47.185 "attach_timeout_ms": 3000, 00:19:47.185 "method": "bdev_nvme_start_discovery", 00:19:47.185 "req_id": 1 00:19:47.185 } 00:19:47.185 Got JSON-RPC error response 00:19:47.185 response: 00:19:47.185 { 00:19:47.185 "code": -110, 00:19:47.185 "message": "Connection timed out" 00:19:47.185 } 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.185 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90468 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.186 rmmod nvme_tcp 00:19:47.186 rmmod nvme_fabrics 00:19:47.186 rmmod nvme_keyring 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:47.186 18:37:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 90438 ']' 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 90438 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 90438 ']' 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 90438 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90438 00:19:47.186 killing process with pid 90438 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90438' 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 90438 00:19:47.186 18:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 90438 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:47.444 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:47.703 00:19:47.703 real 0m9.211s 00:19:47.703 user 0m17.191s 00:19:47.703 sys 0m2.126s 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.703 ************************************ 00:19:47.703 END TEST nvmf_host_discovery 00:19:47.703 ************************************ 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.703 ************************************ 00:19:47.703 START TEST nvmf_host_multipath_status 00:19:47.703 ************************************ 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:47.703 * Looking for test storage... 
00:19:47.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.703 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:47.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.963 --rc genhtml_branch_coverage=1 00:19:47.963 --rc genhtml_function_coverage=1 00:19:47.963 --rc genhtml_legend=1 00:19:47.963 --rc geninfo_all_blocks=1 00:19:47.963 --rc geninfo_unexecuted_blocks=1 00:19:47.963 00:19:47.963 ' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:47.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.963 --rc genhtml_branch_coverage=1 00:19:47.963 --rc genhtml_function_coverage=1 00:19:47.963 --rc genhtml_legend=1 00:19:47.963 --rc geninfo_all_blocks=1 00:19:47.963 --rc geninfo_unexecuted_blocks=1 00:19:47.963 00:19:47.963 ' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:47.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.963 --rc genhtml_branch_coverage=1 00:19:47.963 --rc genhtml_function_coverage=1 00:19:47.963 --rc genhtml_legend=1 00:19:47.963 --rc geninfo_all_blocks=1 00:19:47.963 --rc geninfo_unexecuted_blocks=1 00:19:47.963 00:19:47.963 ' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:47.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.963 --rc genhtml_branch_coverage=1 00:19:47.963 --rc genhtml_function_coverage=1 00:19:47.963 --rc genhtml_legend=1 00:19:47.963 --rc geninfo_all_blocks=1 00:19:47.963 --rc geninfo_unexecuted_blocks=1 00:19:47.963 00:19:47.963 ' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.963 18:37:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:47.963 Cannot find device "nvmf_init_br" 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:47.963 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:47.964 Cannot find device "nvmf_init_br2" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:47.964 Cannot find device "nvmf_tgt_br" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.964 Cannot find device "nvmf_tgt_br2" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:47.964 Cannot find device "nvmf_init_br" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:47.964 Cannot find device "nvmf_init_br2" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:47.964 Cannot find device "nvmf_tgt_br" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:47.964 Cannot find device "nvmf_tgt_br2" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:47.964 Cannot find device "nvmf_br" 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:47.964 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:48.222 Cannot find device "nvmf_init_if" 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:48.222 Cannot find device "nvmf_init_if2" 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:48.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:48.222 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:48.223 18:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:48.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.223 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:19:48.223 00:19:48.223 --- 10.0.0.3 ping statistics --- 00:19:48.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.223 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:48.223 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:48.223 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:19:48.223 00:19:48.223 --- 10.0.0.4 ping statistics --- 00:19:48.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.223 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:48.223 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:48.482 00:19:48.482 --- 10.0.0.1 ping statistics --- 00:19:48.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.482 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:48.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:19:48.482 00:19:48.482 --- 10.0.0.2 ping statistics --- 00:19:48.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.482 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=90964 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 90964 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90964 ']' 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.482 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:48.482 [2024-12-08 18:37:06.248914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:48.482 [2024-12-08 18:37:06.249007] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.482 [2024-12-08 18:37:06.384794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:48.741 [2024-12-08 18:37:06.446487] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.741 [2024-12-08 18:37:06.446536] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.741 [2024-12-08 18:37:06.446546] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.741 [2024-12-08 18:37:06.446554] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.741 [2024-12-08 18:37:06.446561] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.741 [2024-12-08 18:37:06.446719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.741 [2024-12-08 18:37:06.446727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.741 [2024-12-08 18:37:06.501670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90964 00:19:48.741 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:49.000 [2024-12-08 18:37:06.896364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.000 18:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:49.564 Malloc0 00:19:49.564 18:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:49.823 18:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.081 18:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:50.339 [2024-12-08 18:37:08.017060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.339 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:50.598 [2024-12-08 18:37:08.309387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:50.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91012 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91012 /var/tmp/bdevperf.sock 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 91012 ']' 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.598 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:50.857 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.857 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:50.857 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:51.115 18:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:51.374 Nvme0n1 00:19:51.374 18:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:51.633 Nvme0n1 00:19:51.633 18:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:51.633 18:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.178 18:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:54.178 18:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:54.178 18:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:54.178 18:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:55.109 18:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:55.110 18:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:55.110 18:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.110 18:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:55.367 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.367 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:55.367 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.367 18:37:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:55.624 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:55.624 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:55.624 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.624 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.881 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.881 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:55.881 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.881 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:56.138 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.138 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:56.138 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.138 18:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:56.394 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.394 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:56.394 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:56.395 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.651 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.651 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:56.651 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:56.908 18:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:57.172 18:37:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:58.163 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:58.163 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:58.163 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.163 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:58.434 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:58.434 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:58.434 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.434 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:58.693 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.693 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:58.693 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.693 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.952 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.952 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.952 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.952 18:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:59.211 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.211 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:59.211 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.211 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:59.470 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.470 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:59.470 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.470 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:59.729 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.729 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:59.729 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:59.988 18:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:00.246 18:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:01.183 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:01.183 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:01.183 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.183 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:01.442 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.442 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:01.442 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.442 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:01.701 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.701 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:01.701 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.701 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.960 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.960 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:01.960 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.960 18:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:02.218 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.218 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:02.218 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:02.218 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.478 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.478 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:02.478 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.478 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:02.737 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.737 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:02.737 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:02.996 18:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:03.254 18:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:04.187 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:04.187 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:04.187 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.187 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:04.446 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.446 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:04.446 18:37:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.446 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:04.704 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:04.704 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:04.704 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.704 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:04.963 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.963 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:04.963 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.963 18:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:05.222 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.222 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:05.222 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.222 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:05.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:05.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.481 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:05.739 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:05.739 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:05.739 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:05.999 18:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:06.257 18:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.631 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:07.889 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.889 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:07.889 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.889 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:08.148 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.148 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:08.148 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:08.148 18:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:20:08.406 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:08.406 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:08.406 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:08.406 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.664 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:08.664 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:08.664 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:08.923 18:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:09.181 18:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:10.562 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.821 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.821 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:10.821 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.821 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:11.080 18:37:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.080 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:11.080 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.080 18:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:11.338 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.338 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:11.338 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:11.338 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.597 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:11.597 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:11.597 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.597 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:11.855 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.855 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:12.114 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:12.114 18:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:12.372 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:12.631 18:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:13.567 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:13.567 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:13.567 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.567 18:37:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:13.825 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.825 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:13.825 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.825 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:14.084 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.084 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:14.084 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.084 18:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:14.342 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.342 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:14.342 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.342 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:14.600 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.600 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:14.600 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.600 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:14.859 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.859 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:14.859 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.859 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:15.118 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.118 18:37:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:15.118 18:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:15.118 18:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:15.376 18:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:16.353 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:16.353 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:16.353 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.353 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:16.610 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.610 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:16.610 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.610 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:17.177 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.177 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:17.177 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.177 18:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:17.177 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.177 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:17.177 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.177 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:17.435 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.435 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:17.435 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.435 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:17.693 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.693 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:17.693 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.693 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:17.951 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.951 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:17.951 18:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:18.210 18:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:18.469 18:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:19.403 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:19.403 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:19.403 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.403 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.971 18:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:20.229 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.229 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:20.229 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.230 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:20.488 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.488 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:20.488 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.488 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:20.747 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.747 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:20.747 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.747 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:21.005 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.005 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:21.005 18:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:21.264 18:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:21.523 18:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:22.459 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:22.459 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:22.459 18:37:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.459 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:22.718 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.718 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:22.718 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.718 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:22.997 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:22.997 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:22.997 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:22.997 18:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.310 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.310 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:23.310 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.310 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:23.580 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.580 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:23.580 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.580 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:23.838 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.838 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:23.838 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.838 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible'
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91012
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 91012 ']'
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 91012
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91012
00:20:24.097 killing process with pid 91012
18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91012'
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 91012
00:20:24.097 18:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 91012
00:20:24.097 {
00:20:24.097   "results": [
00:20:24.097     {
00:20:24.097       "job": "Nvme0n1",
00:20:24.097       "core_mask": "0x4",
00:20:24.097       "workload": "verify",
00:20:24.097       "status": "terminated",
00:20:24.097       "verify_range": {
00:20:24.097         "start": 0,
00:20:24.097         "length": 16384
00:20:24.097       },
00:20:24.097       "queue_depth": 128,
00:20:24.097       "io_size": 4096,
00:20:24.097       "runtime": 32.282254,
00:20:24.097       "iops": 9815.609529619585,
00:20:24.097       "mibps": 38.342224725076505,
00:20:24.097       "io_failed": 0,
00:20:24.097       "io_timeout": 0,
00:20:24.097       "avg_latency_us": 13014.75933547741,
00:20:24.097       "min_latency_us": 99.60727272727273,
00:20:24.097       "max_latency_us": 4026531.84
00:20:24.097     }
00:20:24.097   ],
00:20:24.097   "core_count": 1
00:20:24.097 }
00:20:24.359 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91012
00:20:24.359 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:24.359 [2024-12-08 18:37:08.370778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
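For reference, every status check and ANA transition traced above reduces to three RPCs: bdev_nvme_get_io_paths on the bdevperf socket (filtered with jq on .transport.trsvcid), nvmf_subsystem_listener_set_ana_state for each listener, and bdev_nvme_set_multipath_policy. The following is a minimal sketch of that pattern, assuming the rpc.py path, bdevperf socket, subsystem NQN, listener address and ports that appear in the trace; it is not the verbatim multipath_status.sh source.

  # Sketch of the io-path check / ANA-state toggle pattern exercised above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path from the trace
  sock=/var/tmp/bdevperf.sock                       # bdevperf RPC socket from the trace
  nqn=nqn.2016-06.io.spdk:cnode1                    # subsystem NQN from the trace

  port_status() {   # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
      local got
      got=$("$rpc" -s "$sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }

  set_ANA_state() {   # set_ANA_state <state for 4420> <state for 4421>
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

  # Example: enable active_active path selection, mark both listeners optimized,
  # give the host a second to pick up the ANA change, then verify both paths.
  "$rpc" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  set_ANA_state optimized optimized
  sleep 1
  port_status 4420 current true && port_status 4421 current true

Once the active_active policy is applied and both listeners are optimized, both paths report current=true, which is what the check_status true true true true true true step above verifies; the try.txt dump that follows shows bdevperf's own view of the run, including completions returned with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while a listener was in the inaccessible state.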
00:20:24.359 [2024-12-08 18:37:08.370865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91012 ] 00:20:24.359 [2024-12-08 18:37:08.505254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.359 [2024-12-08 18:37:08.569363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.359 [2024-12-08 18:37:08.621713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:24.359 [2024-12-08 18:37:09.448754] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:20:24.359 Running I/O for 90 seconds... 00:20:24.359 9303.00 IOPS, 36.34 MiB/s [2024-12-08T18:37:42.289Z] 9739.50 IOPS, 38.04 MiB/s [2024-12-08T18:37:42.289Z] 9879.67 IOPS, 38.59 MiB/s [2024-12-08T18:37:42.289Z] 9877.75 IOPS, 38.58 MiB/s [2024-12-08T18:37:42.289Z] 9864.60 IOPS, 38.53 MiB/s [2024-12-08T18:37:42.289Z] 9887.67 IOPS, 38.62 MiB/s [2024-12-08T18:37:42.289Z] 9911.71 IOPS, 38.72 MiB/s [2024-12-08T18:37:42.289Z] 9932.62 IOPS, 38.80 MiB/s [2024-12-08T18:37:42.289Z] 9960.67 IOPS, 38.91 MiB/s [2024-12-08T18:37:42.289Z] 9963.00 IOPS, 38.92 MiB/s [2024-12-08T18:37:42.289Z] 10044.09 IOPS, 39.23 MiB/s [2024-12-08T18:37:42.289Z] 10167.08 IOPS, 39.72 MiB/s [2024-12-08T18:37:42.289Z] 10275.46 IOPS, 40.14 MiB/s [2024-12-08T18:37:42.289Z] 10366.64 IOPS, 40.49 MiB/s [2024-12-08T18:37:42.290Z] [2024-12-08 18:37:23.785413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:31 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.360 [2024-12-08 18:37:23.785757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.785823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.785898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.785931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.785965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.785984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.785998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 
cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.360 [2024-12-08 18:37:23.786612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:24.360 [2024-12-08 18:37:23.786631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.786645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.786678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.786712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.786753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.786799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.786847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.786880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.786912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.786945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.786978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.786997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787174] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.361 [2024-12-08 18:37:23.787280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.361 [2024-12-08 18:37:23.787713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:24.361 [2024-12-08 18:37:23.787733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.787747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.787767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.787791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.787814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.787830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.787850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.787865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.787884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.787899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.787919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 
18:37:23.787933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.787953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 18:37:23.787968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.787988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 18:37:23.788002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 18:37:23.788036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 18:37:23.788101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 18:37:23.788147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 18:37:23.788182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.362 [2024-12-08 18:37:23.788215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:712 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.362 [2024-12-08 18:37:23.788817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:24.362 [2024-12-08 18:37:23.788836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.788881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.788899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.788914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.788933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.788948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.788967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.788980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.788999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.789505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.789528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.363 [2024-12-08 18:37:23.790284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 
18:37:23.790588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:24.363 [2024-12-08 18:37:23.790769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.363 [2024-12-08 18:37:23.790784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.790839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:23.790855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.790879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.790893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.790918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.790937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.790962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.790976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.791000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.791015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 
p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.791039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.791053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.791077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.791090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.791114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.791128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:23.791152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:23.791167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:24.364 9829.13 IOPS, 38.40 MiB/s [2024-12-08T18:37:42.294Z] 9214.81 IOPS, 36.00 MiB/s [2024-12-08T18:37:42.294Z] 8672.76 IOPS, 33.88 MiB/s [2024-12-08T18:37:42.294Z] 8190.94 IOPS, 32.00 MiB/s [2024-12-08T18:37:42.294Z] 8239.89 IOPS, 32.19 MiB/s [2024-12-08T18:37:42.294Z] 8399.50 IOPS, 32.81 MiB/s [2024-12-08T18:37:42.294Z] 8630.19 IOPS, 33.71 MiB/s [2024-12-08T18:37:42.294Z] 8886.27 IOPS, 34.71 MiB/s [2024-12-08T18:37:42.294Z] 9149.83 IOPS, 35.74 MiB/s [2024-12-08T18:37:42.294Z] 9302.62 IOPS, 36.34 MiB/s [2024-12-08T18:37:42.294Z] 9390.48 IOPS, 36.68 MiB/s [2024-12-08T18:37:42.294Z] 9472.08 IOPS, 37.00 MiB/s [2024-12-08T18:37:42.294Z] 9584.04 IOPS, 37.44 MiB/s [2024-12-08T18:37:42.294Z] 9789.39 IOPS, 38.24 MiB/s [2024-12-08T18:37:42.294Z] 9810.34 IOPS, 38.32 MiB/s [2024-12-08T18:37:42.294Z] [2024-12-08 18:37:39.283933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.283992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284158] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:39.284313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:39.284345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:39.284376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.364 [2024-12-08 18:37:39.284408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105760 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:24.364 [2024-12-08 18:37:39.284604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.364 [2024-12-08 18:37:39.284617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.284636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.284650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.284667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.284680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.284698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.284711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.284729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.284742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.284761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.284774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.284792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.284805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.284823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.284837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.285900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.285926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.285950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.285965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.285995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.286011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.286042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.286073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.286105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.286137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.286170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.286202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 
18:37:39.286220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.286234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.286265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.365 [2024-12-08 18:37:39.286297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.286328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.286359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.365 [2024-12-08 18:37:39.286398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:24.365 [2024-12-08 18:37:39.286433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.286856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.286969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.366 [2024-12-08 18:37:39.286982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.287004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.287018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.287036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.287050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.287068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.287081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:24.366 [2024-12-08 18:37:39.287100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:24.366 [2024-12-08 18:37:39.287113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:24.366 9827.50 IOPS, 38.39 MiB/s [2024-12-08T18:37:42.296Z] 9835.03 IOPS, 38.42 MiB/s [2024-12-08T18:37:42.296Z] 9828.44 IOPS, 38.39 MiB/s [2024-12-08T18:37:42.296Z] Received shutdown signal, test time was about 32.282970 seconds 00:20:24.366 00:20:24.366 Latency(us) 00:20:24.366 [2024-12-08T18:37:42.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.366 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:24.366 Verification LBA range: start 0x0 length 0x4000 00:20:24.366 Nvme0n1 : 32.28 9815.61 38.34 0.00 0.00 13014.76 99.61 4026531.84 00:20:24.366 [2024-12-08T18:37:42.296Z] =================================================================================================================== 00:20:24.366 [2024-12-08T18:37:42.296Z] Total : 9815.61 38.34 0.00 0.00 13014.76 99.61 4026531.84 00:20:24.366 [2024-12-08 
18:37:41.867540] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:20:24.366 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:24.626 rmmod nvme_tcp 00:20:24.626 rmmod nvme_fabrics 00:20:24.626 rmmod nvme_keyring 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 90964 ']' 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 90964 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90964 ']' 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90964 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90964 00:20:24.626 killing process with pid 90964 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90964' 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90964 00:20:24.626 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90964 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:24.885 18:37:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:24.885 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:25.144 00:20:25.144 real 0m37.375s 00:20:25.144 user 1m59.603s 00:20:25.144 sys 0m11.454s 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:25.144 18:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:25.144 
************************************ 00:20:25.144 END TEST nvmf_host_multipath_status 00:20:25.145 ************************************ 00:20:25.145 18:37:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:25.145 18:37:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:25.145 18:37:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:25.145 18:37:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.145 ************************************ 00:20:25.145 START TEST nvmf_discovery_remove_ifc 00:20:25.145 ************************************ 00:20:25.145 18:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:25.145 * Looking for test storage... 00:20:25.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:25.145 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:25.145 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:25.145 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:25.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.405 --rc genhtml_branch_coverage=1 00:20:25.405 --rc genhtml_function_coverage=1 00:20:25.405 --rc genhtml_legend=1 00:20:25.405 --rc geninfo_all_blocks=1 00:20:25.405 --rc geninfo_unexecuted_blocks=1 00:20:25.405 00:20:25.405 ' 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:25.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.405 --rc genhtml_branch_coverage=1 00:20:25.405 --rc genhtml_function_coverage=1 00:20:25.405 --rc genhtml_legend=1 00:20:25.405 --rc geninfo_all_blocks=1 00:20:25.405 --rc geninfo_unexecuted_blocks=1 00:20:25.405 00:20:25.405 ' 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:25.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.405 --rc genhtml_branch_coverage=1 00:20:25.405 --rc genhtml_function_coverage=1 00:20:25.405 --rc genhtml_legend=1 00:20:25.405 --rc geninfo_all_blocks=1 00:20:25.405 --rc geninfo_unexecuted_blocks=1 00:20:25.405 00:20:25.405 ' 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:25.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.405 --rc genhtml_branch_coverage=1 00:20:25.405 --rc genhtml_function_coverage=1 00:20:25.405 --rc genhtml_legend=1 00:20:25.405 --rc geninfo_all_blocks=1 00:20:25.405 --rc geninfo_unexecuted_blocks=1 00:20:25.405 00:20:25.405 ' 00:20:25.405 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.406 18:37:43 
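Editor's note: the lcov probe above runs the generic version comparison from scripts/common.sh. A condensed bash sketch of that logic, assuming the helper names lt/cmp_versions seen in the trace (the real script has more operators and validation):

    # split both versions on '.', '-' and ':', then compare numerically field by field
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]   # all fields equal only satisfies '=='
    }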
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.406 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:25.406 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:25.407 18:37:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:25.407 Cannot find device "nvmf_init_br" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:25.407 Cannot find device "nvmf_init_br2" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:25.407 Cannot find device "nvmf_tgt_br" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.407 Cannot find device "nvmf_tgt_br2" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:25.407 Cannot find device "nvmf_init_br" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:25.407 Cannot find device "nvmf_init_br2" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:25.407 Cannot find device "nvmf_tgt_br" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:25.407 Cannot find device "nvmf_tgt_br2" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:25.407 Cannot find device "nvmf_br" 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:25.407 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:25.407 Cannot find device "nvmf_init_if" 00:20:25.666 18:37:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:25.666 Cannot find device "nvmf_init_if2" 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:25.666 18:37:43 
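Editor's note: nvmf_veth_init builds a small two-sided topology: two initiator veth pairs stay in the root namespace, the target ends of two more pairs are moved into nvmf_tgt_ns_spdk, and the *_br ends are enslaved to the nvmf_br bridge in the steps that follow. Condensed from the commands traced above:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the usable end, *_br is the end that will join the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # move the target-side ends into the namespace and address everything
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # all interfaces are then brought up and the *_br ends made members of nvmf_br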
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:25.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:25.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:20:25.666 00:20:25.666 --- 10.0.0.3 ping statistics --- 00:20:25.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.666 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:25.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:25.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:20:25.666 00:20:25.666 --- 10.0.0.4 ping statistics --- 00:20:25.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.666 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:25.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:25.666 00:20:25.666 --- 10.0.0.1 ping statistics --- 00:20:25.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.666 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:25.666 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:25.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:25.667 00:20:25.667 --- 10.0.0.2 ping statistics --- 00:20:25.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.667 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:25.667 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:25.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=91832 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 91832 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91832 ']' 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
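Editor's note: the ACCEPT rules added just before those pings each carry an SPDK_NVMF comment, which is what lets the iptables-save | grep -v SPDK_NVMF teardown at the top of this section remove them without touching anything else; connectivity is then verified in both directions across the bridge. From the trace, in essence:

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'

    # initiator side -> namespaced target, then target namespace -> initiator side
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

(the '...' above stands for the full rule text the harness embeds in each comment).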
00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.925 18:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:25.925 [2024-12-08 18:37:43.665905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:25.925 [2024-12-08 18:37:43.666164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.925 [2024-12-08 18:37:43.807949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.184 [2024-12-08 18:37:43.880937] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.184 [2024-12-08 18:37:43.881248] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.184 [2024-12-08 18:37:43.881448] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.184 [2024-12-08 18:37:43.881610] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.184 [2024-12-08 18:37:43.881654] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.184 [2024-12-08 18:37:43.881796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.184 [2024-12-08 18:37:43.937983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.751 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:26.751 [2024-12-08 18:37:44.679393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.010 [2024-12-08 18:37:44.687547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:27.010 null0 00:20:27.010 [2024-12-08 18:37:44.719460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:27.010 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
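Editor's note: the rpc_cmd batch at discovery_remove_ifc.sh@43 is not expanded in the xtrace; only its effects are visible (TCP transport initialized, discovery listener on 10.0.0.3:8009, a null0 bdev, and a data listener on 10.0.0.3:4420). A plausible equivalent sequence using stock SPDK RPCs, offered as an assumption about the script rather than a copy of it, would be:

    rpc.py nvmf_create_transport -t tcp -o                       # NVMF_TRANSPORT_OPTS from the trace
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    rpc.py bdev_null_create null0 1000 512                       # size and block size are illustrative guesses
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420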
00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91864 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91864 /tmp/host.sock 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91864 ']' 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.010 18:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.010 [2024-12-08 18:37:44.798027] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:27.010 [2024-12-08 18:37:44.798298] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91864 ] 00:20:27.268 [2024-12-08 18:37:44.939483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.268 [2024-12-08 18:37:45.008887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.268 [2024-12-08 18:37:45.113536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.268 18:37:45 
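Editor's note: the next traced call is the host-side discovery attach. rpc_cmd here effectively forwards to scripts/rpc.py against the /tmp/host.sock application, so the same request issued directly would read:

    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach
    # the short loss/reconnect timeouts are what make the controller drop quickly once
    # nvmf_tgt_if is taken down later in the test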
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.268 18:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.652 [2024-12-08 18:37:46.164779] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:28.652 [2024-12-08 18:37:46.164805] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:28.652 [2024-12-08 18:37:46.164820] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:28.652 [2024-12-08 18:37:46.170838] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:28.652 [2024-12-08 18:37:46.227345] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:28.652 [2024-12-08 18:37:46.227397] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:28.652 [2024-12-08 18:37:46.227459] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:28.652 [2024-12-08 18:37:46.227475] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:28.652 [2024-12-08 18:37:46.227495] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.652 [2024-12-08 18:37:46.233494] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb866f0 was disconnected and freed. delete nvme_qpair. 
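Editor's note: the repeated bdev_get_bdevs / sleep 1 cycles that follow are the wait_for_bdev helper polling until the host's bdev list matches the expected name (nvme0n1 now, the empty string after the interface is pulled, nvme1n1 once it comes back). A rough reconstruction of those helpers, based on the jq/sort/xargs pipeline visible in the trace:

    get_bdev_list() {
        # flatten the bdev names into one sorted, space-separated string
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }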
00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:28.652 18:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:29.590 18:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.528 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.788 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:30.788 18:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:31.725 18:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:32.661 18:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:34.047 18:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:34.047 [2024-12-08 18:37:51.655848] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:34.047 [2024-12-08 18:37:51.655924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.047 [2024-12-08 18:37:51.655940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.047 [2024-12-08 18:37:51.655952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.047 [2024-12-08 18:37:51.655961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.047 [2024-12-08 18:37:51.655970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.047 [2024-12-08 18:37:51.655978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.047 [2024-12-08 18:37:51.655987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.047 [2024-12-08 18:37:51.655996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.047 [2024-12-08 18:37:51.656005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.047 [2024-12-08 18:37:51.656013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.047 [2024-12-08 18:37:51.656022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61c40 is same with the state(6) to be set 00:20:34.047 [2024-12-08 18:37:51.665846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61c40 (9): Bad file descriptor 00:20:34.047 [2024-12-08 18:37:51.675863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:34.981 [2024-12-08 18:37:52.685489] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:34.981 [2024-12-08 18:37:52.685722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb61c40 with addr=10.0.0.3, port=4420 00:20:34.981 [2024-12-08 18:37:52.685752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61c40 is same with the state(6) to be set 00:20:34.981 [2024-12-08 18:37:52.685788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb61c40 (9): Bad file descriptor 00:20:34.981 [2024-12-08 18:37:52.686258] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:34.981 [2024-12-08 18:37:52.686296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:34.981 [2024-12-08 18:37:52.686309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:34.981 [2024-12-08 18:37:52.686325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:34.981 [2024-12-08 18:37:52.686349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:34.981 [2024-12-08 18:37:52.686362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:34.981 18:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:35.918 [2024-12-08 18:37:53.686431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.918 [2024-12-08 18:37:53.686485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.918 [2024-12-08 18:37:53.686500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.918 [2024-12-08 18:37:53.686515] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:35.918 [2024-12-08 18:37:53.686544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.918 [2024-12-08 18:37:53.686583] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:35.918 [2024-12-08 18:37:53.686627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.918 [2024-12-08 18:37:53.686648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.918 [2024-12-08 18:37:53.686666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.918 [2024-12-08 18:37:53.686680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.918 [2024-12-08 18:37:53.686693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.918 [2024-12-08 18:37:53.686705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.918 [2024-12-08 18:37:53.686718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.918 [2024-12-08 18:37:53.686731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.918 [2024-12-08 18:37:53.686745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.918 [2024-12-08 18:37:53.686757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.918 [2024-12-08 18:37:53.686770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
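Editor's note: with --ctrlr-loss-timeout-sec 2 expired, the reconnect attempts above give up, the controller is marked failed, and the discovery poller removes the cnode0 entry, which is why get_bdev_list soon returns an empty string. The test then restores the path it removed; the commands at discovery_remove_ifc.sh@82-83, traced next, amount to:

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # the still-running discovery service re-attaches and the new controller surfaces as nvme1n1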
00:20:35.918 [2024-12-08 18:37:53.686905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb50180 (9): Bad file descriptor 00:20:35.918 [2024-12-08 18:37:53.687924] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:35.918 [2024-12-08 18:37:53.687952] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:35.918 18:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:37.306 18:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.874 [2024-12-08 18:37:55.698178] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:37.874 [2024-12-08 18:37:55.698199] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:37.874 [2024-12-08 18:37:55.698216] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:37.874 [2024-12-08 18:37:55.704214] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:37.874 [2024-12-08 18:37:55.760780] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:37.874 [2024-12-08 18:37:55.760948] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:37.874 [2024-12-08 18:37:55.761010] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:37.874 [2024-12-08 18:37:55.761109] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:37.874 [2024-12-08 18:37:55.761166] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:37.874 [2024-12-08 18:37:55.766781] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb95af0 was disconnected and freed. delete nvme_qpair. 
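[editor's note] The xtrace above repeats the script's get_bdev_list/wait_for_bdev pattern: after the target address 10.0.0.3 is re-added to nvmf_tgt_if, the host is polled over its RPC socket until discovery re-attaches the namespace as nvme1n1. A minimal sketch of that polling loop, assuming the same /tmp/host.sock socket and the harness's rpc_cmd wrapper seen in the trace (illustrative only, not the verbatim helper from discovery_remove_ifc.sh):

    # List bdev names known to the SPDK host app, one line, sorted.
    get_bdev_list() {
        # rpc_cmd wraps scripts/rpc.py in the autotest harness; -s picks the host app's socket.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Block until the named bdev shows up (the script's wait_for_bdev retries once per second).
    wait_for_bdev() {
        local bdev=$1
        while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
            sleep 1
        done
    }

    # e.g. after: ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # wait_for_bdev nvme1n1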
00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91864 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91864 ']' 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91864 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91864 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:38.133 killing process with pid 91864 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91864' 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91864 00:20:38.133 18:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91864 00:20:38.391 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:38.391 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:38.391 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:38.391 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.391 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:38.391 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.391 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.391 rmmod nvme_tcp 00:20:38.391 rmmod nvme_fabrics 00:20:38.650 rmmod nvme_keyring 00:20:38.651 18:37:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 91832 ']' 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 91832 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91832 ']' 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91832 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91832 00:20:38.651 killing process with pid 91832 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91832' 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91832 00:20:38.651 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91832 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.910 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:39.169 00:20:39.169 real 0m13.902s 00:20:39.169 user 0m23.149s 00:20:39.169 sys 0m2.555s 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.169 ************************************ 00:20:39.169 END TEST nvmf_discovery_remove_ifc 00:20:39.169 ************************************ 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.169 ************************************ 00:20:39.169 START TEST nvmf_identify_kernel_target 00:20:39.169 ************************************ 00:20:39.169 18:37:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:39.169 * Looking for test storage... 
00:20:39.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:39.169 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:39.169 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:39.169 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:39.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.430 --rc genhtml_branch_coverage=1 00:20:39.430 --rc genhtml_function_coverage=1 00:20:39.430 --rc genhtml_legend=1 00:20:39.430 --rc geninfo_all_blocks=1 00:20:39.430 --rc geninfo_unexecuted_blocks=1 00:20:39.430 00:20:39.430 ' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:39.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.430 --rc genhtml_branch_coverage=1 00:20:39.430 --rc genhtml_function_coverage=1 00:20:39.430 --rc genhtml_legend=1 00:20:39.430 --rc geninfo_all_blocks=1 00:20:39.430 --rc geninfo_unexecuted_blocks=1 00:20:39.430 00:20:39.430 ' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:39.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.430 --rc genhtml_branch_coverage=1 00:20:39.430 --rc genhtml_function_coverage=1 00:20:39.430 --rc genhtml_legend=1 00:20:39.430 --rc geninfo_all_blocks=1 00:20:39.430 --rc geninfo_unexecuted_blocks=1 00:20:39.430 00:20:39.430 ' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:39.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.430 --rc genhtml_branch_coverage=1 00:20:39.430 --rc genhtml_function_coverage=1 00:20:39.430 --rc genhtml_legend=1 00:20:39.430 --rc geninfo_all_blocks=1 00:20:39.430 --rc geninfo_unexecuted_blocks=1 00:20:39.430 00:20:39.430 ' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
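[editor's note] The xtrace above steps through scripts/common.sh's cmp_versions helper while checking whether the installed lcov is new enough (lt 1.15 2) to enable the branch/function coverage options. A minimal sketch of that dotted-version comparison, assuming purely numeric components (an illustrative reconstruction, not the exact upstream helper):

    # Compare two dotted version strings component by component, e.g. cmp_versions 1.15 '<' 2.
    cmp_versions() {
        local op=$2 ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }   # first differing component decides
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '>=' || $op == '<=' ]]    # all components equal
    }

    lt() { cmp_versions "$1" '<' "$2"; }
    # lt 1.15 2 succeeds, so the newer lcov_branch_coverage/lcov_function_coverage flags are used.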
00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.430 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.431 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:39.431 18:37:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.431 18:37:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:39.431 Cannot find device "nvmf_init_br" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:39.431 Cannot find device "nvmf_init_br2" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:39.431 Cannot find device "nvmf_tgt_br" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.431 Cannot find device "nvmf_tgt_br2" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:39.431 Cannot find device "nvmf_init_br" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:39.431 Cannot find device "nvmf_init_br2" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:39.431 Cannot find device "nvmf_tgt_br" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:39.431 Cannot find device "nvmf_tgt_br2" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:39.431 Cannot find device "nvmf_br" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:39.431 Cannot find device "nvmf_init_if" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:39.431 Cannot find device "nvmf_init_if2" 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.431 18:37:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.431 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.691 18:37:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:39.691 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:20:39.692 00:20:39.692 --- 10.0.0.3 ping statistics --- 00:20:39.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.692 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.692 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.692 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:20:39.692 00:20:39.692 --- 10.0.0.4 ping statistics --- 00:20:39.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.692 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:39.692 00:20:39.692 --- 10.0.0.1 ping statistics --- 00:20:39.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.692 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:39.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:20:39.692 00:20:39.692 --- 10.0.0.2 ping statistics --- 00:20:39.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.692 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:39.692 18:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:40.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.261 Waiting for block devices as requested 00:20:40.261 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.261 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.520 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:40.521 No valid GPT data, bailing 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:40.521 18:37:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:40.521 No valid GPT data, bailing 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:40.521 No valid GPT data, bailing 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:40.521 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:40.781 No valid GPT data, bailing 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -a 10.0.0.1 -t tcp -s 4420 00:20:40.781 00:20:40.781 Discovery Log Number of Records 2, Generation counter 2 00:20:40.781 =====Discovery Log Entry 0====== 00:20:40.781 trtype: tcp 00:20:40.781 adrfam: ipv4 00:20:40.781 subtype: current discovery subsystem 00:20:40.781 treq: not specified, sq flow control disable supported 00:20:40.781 portid: 1 00:20:40.781 trsvcid: 4420 00:20:40.781 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:40.781 traddr: 10.0.0.1 00:20:40.781 eflags: none 00:20:40.781 sectype: none 00:20:40.781 =====Discovery Log Entry 1====== 00:20:40.781 trtype: tcp 00:20:40.781 adrfam: ipv4 00:20:40.781 subtype: nvme subsystem 00:20:40.781 treq: not 
specified, sq flow control disable supported 00:20:40.781 portid: 1 00:20:40.781 trsvcid: 4420 00:20:40.781 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:40.781 traddr: 10.0.0.1 00:20:40.781 eflags: none 00:20:40.781 sectype: none 00:20:40.781 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:40.781 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:41.042 ===================================================== 00:20:41.042 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:41.042 ===================================================== 00:20:41.042 Controller Capabilities/Features 00:20:41.042 ================================ 00:20:41.042 Vendor ID: 0000 00:20:41.042 Subsystem Vendor ID: 0000 00:20:41.042 Serial Number: e5513208fa7dcddb07f7 00:20:41.042 Model Number: Linux 00:20:41.042 Firmware Version: 6.8.9-20 00:20:41.042 Recommended Arb Burst: 0 00:20:41.042 IEEE OUI Identifier: 00 00 00 00:20:41.042 Multi-path I/O 00:20:41.042 May have multiple subsystem ports: No 00:20:41.042 May have multiple controllers: No 00:20:41.042 Associated with SR-IOV VF: No 00:20:41.042 Max Data Transfer Size: Unlimited 00:20:41.042 Max Number of Namespaces: 0 00:20:41.042 Max Number of I/O Queues: 1024 00:20:41.042 NVMe Specification Version (VS): 1.3 00:20:41.042 NVMe Specification Version (Identify): 1.3 00:20:41.042 Maximum Queue Entries: 1024 00:20:41.042 Contiguous Queues Required: No 00:20:41.042 Arbitration Mechanisms Supported 00:20:41.042 Weighted Round Robin: Not Supported 00:20:41.042 Vendor Specific: Not Supported 00:20:41.042 Reset Timeout: 7500 ms 00:20:41.042 Doorbell Stride: 4 bytes 00:20:41.042 NVM Subsystem Reset: Not Supported 00:20:41.042 Command Sets Supported 00:20:41.042 NVM Command Set: Supported 00:20:41.042 Boot Partition: Not Supported 00:20:41.042 Memory Page Size Minimum: 4096 bytes 00:20:41.042 Memory Page Size Maximum: 4096 bytes 00:20:41.042 Persistent Memory Region: Not Supported 00:20:41.042 Optional Asynchronous Events Supported 00:20:41.042 Namespace Attribute Notices: Not Supported 00:20:41.042 Firmware Activation Notices: Not Supported 00:20:41.042 ANA Change Notices: Not Supported 00:20:41.042 PLE Aggregate Log Change Notices: Not Supported 00:20:41.042 LBA Status Info Alert Notices: Not Supported 00:20:41.042 EGE Aggregate Log Change Notices: Not Supported 00:20:41.042 Normal NVM Subsystem Shutdown event: Not Supported 00:20:41.042 Zone Descriptor Change Notices: Not Supported 00:20:41.042 Discovery Log Change Notices: Supported 00:20:41.042 Controller Attributes 00:20:41.042 128-bit Host Identifier: Not Supported 00:20:41.042 Non-Operational Permissive Mode: Not Supported 00:20:41.042 NVM Sets: Not Supported 00:20:41.042 Read Recovery Levels: Not Supported 00:20:41.042 Endurance Groups: Not Supported 00:20:41.042 Predictable Latency Mode: Not Supported 00:20:41.042 Traffic Based Keep ALive: Not Supported 00:20:41.042 Namespace Granularity: Not Supported 00:20:41.042 SQ Associations: Not Supported 00:20:41.042 UUID List: Not Supported 00:20:41.042 Multi-Domain Subsystem: Not Supported 00:20:41.042 Fixed Capacity Management: Not Supported 00:20:41.042 Variable Capacity Management: Not Supported 00:20:41.042 Delete Endurance Group: Not Supported 00:20:41.042 Delete NVM Set: Not Supported 00:20:41.042 Extended LBA Formats Supported: Not Supported 00:20:41.042 Flexible Data 
Placement Supported: Not Supported 00:20:41.042 00:20:41.042 Controller Memory Buffer Support 00:20:41.042 ================================ 00:20:41.042 Supported: No 00:20:41.042 00:20:41.042 Persistent Memory Region Support 00:20:41.042 ================================ 00:20:41.042 Supported: No 00:20:41.042 00:20:41.042 Admin Command Set Attributes 00:20:41.042 ============================ 00:20:41.042 Security Send/Receive: Not Supported 00:20:41.042 Format NVM: Not Supported 00:20:41.042 Firmware Activate/Download: Not Supported 00:20:41.042 Namespace Management: Not Supported 00:20:41.042 Device Self-Test: Not Supported 00:20:41.042 Directives: Not Supported 00:20:41.042 NVMe-MI: Not Supported 00:20:41.042 Virtualization Management: Not Supported 00:20:41.042 Doorbell Buffer Config: Not Supported 00:20:41.043 Get LBA Status Capability: Not Supported 00:20:41.043 Command & Feature Lockdown Capability: Not Supported 00:20:41.043 Abort Command Limit: 1 00:20:41.043 Async Event Request Limit: 1 00:20:41.043 Number of Firmware Slots: N/A 00:20:41.043 Firmware Slot 1 Read-Only: N/A 00:20:41.043 Firmware Activation Without Reset: N/A 00:20:41.043 Multiple Update Detection Support: N/A 00:20:41.043 Firmware Update Granularity: No Information Provided 00:20:41.043 Per-Namespace SMART Log: No 00:20:41.043 Asymmetric Namespace Access Log Page: Not Supported 00:20:41.043 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:41.043 Command Effects Log Page: Not Supported 00:20:41.043 Get Log Page Extended Data: Supported 00:20:41.043 Telemetry Log Pages: Not Supported 00:20:41.043 Persistent Event Log Pages: Not Supported 00:20:41.043 Supported Log Pages Log Page: May Support 00:20:41.043 Commands Supported & Effects Log Page: Not Supported 00:20:41.043 Feature Identifiers & Effects Log Page:May Support 00:20:41.043 NVMe-MI Commands & Effects Log Page: May Support 00:20:41.043 Data Area 4 for Telemetry Log: Not Supported 00:20:41.043 Error Log Page Entries Supported: 1 00:20:41.043 Keep Alive: Not Supported 00:20:41.043 00:20:41.043 NVM Command Set Attributes 00:20:41.043 ========================== 00:20:41.043 Submission Queue Entry Size 00:20:41.043 Max: 1 00:20:41.043 Min: 1 00:20:41.043 Completion Queue Entry Size 00:20:41.043 Max: 1 00:20:41.043 Min: 1 00:20:41.043 Number of Namespaces: 0 00:20:41.043 Compare Command: Not Supported 00:20:41.043 Write Uncorrectable Command: Not Supported 00:20:41.043 Dataset Management Command: Not Supported 00:20:41.043 Write Zeroes Command: Not Supported 00:20:41.043 Set Features Save Field: Not Supported 00:20:41.043 Reservations: Not Supported 00:20:41.043 Timestamp: Not Supported 00:20:41.043 Copy: Not Supported 00:20:41.043 Volatile Write Cache: Not Present 00:20:41.043 Atomic Write Unit (Normal): 1 00:20:41.043 Atomic Write Unit (PFail): 1 00:20:41.043 Atomic Compare & Write Unit: 1 00:20:41.043 Fused Compare & Write: Not Supported 00:20:41.043 Scatter-Gather List 00:20:41.043 SGL Command Set: Supported 00:20:41.043 SGL Keyed: Not Supported 00:20:41.043 SGL Bit Bucket Descriptor: Not Supported 00:20:41.043 SGL Metadata Pointer: Not Supported 00:20:41.043 Oversized SGL: Not Supported 00:20:41.043 SGL Metadata Address: Not Supported 00:20:41.043 SGL Offset: Supported 00:20:41.043 Transport SGL Data Block: Not Supported 00:20:41.043 Replay Protected Memory Block: Not Supported 00:20:41.043 00:20:41.043 Firmware Slot Information 00:20:41.043 ========================= 00:20:41.043 Active slot: 0 00:20:41.043 00:20:41.043 00:20:41.043 Error Log 
00:20:41.043 ========= 00:20:41.043 00:20:41.043 Active Namespaces 00:20:41.043 ================= 00:20:41.043 Discovery Log Page 00:20:41.043 ================== 00:20:41.043 Generation Counter: 2 00:20:41.043 Number of Records: 2 00:20:41.043 Record Format: 0 00:20:41.043 00:20:41.043 Discovery Log Entry 0 00:20:41.043 ---------------------- 00:20:41.043 Transport Type: 3 (TCP) 00:20:41.043 Address Family: 1 (IPv4) 00:20:41.043 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:41.043 Entry Flags: 00:20:41.043 Duplicate Returned Information: 0 00:20:41.043 Explicit Persistent Connection Support for Discovery: 0 00:20:41.043 Transport Requirements: 00:20:41.043 Secure Channel: Not Specified 00:20:41.043 Port ID: 1 (0x0001) 00:20:41.043 Controller ID: 65535 (0xffff) 00:20:41.043 Admin Max SQ Size: 32 00:20:41.043 Transport Service Identifier: 4420 00:20:41.043 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:41.043 Transport Address: 10.0.0.1 00:20:41.043 Discovery Log Entry 1 00:20:41.043 ---------------------- 00:20:41.043 Transport Type: 3 (TCP) 00:20:41.043 Address Family: 1 (IPv4) 00:20:41.043 Subsystem Type: 2 (NVM Subsystem) 00:20:41.043 Entry Flags: 00:20:41.043 Duplicate Returned Information: 0 00:20:41.043 Explicit Persistent Connection Support for Discovery: 0 00:20:41.043 Transport Requirements: 00:20:41.043 Secure Channel: Not Specified 00:20:41.043 Port ID: 1 (0x0001) 00:20:41.043 Controller ID: 65535 (0xffff) 00:20:41.043 Admin Max SQ Size: 32 00:20:41.043 Transport Service Identifier: 4420 00:20:41.043 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:41.043 Transport Address: 10.0.0.1 00:20:41.043 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:41.043 get_feature(0x01) failed 00:20:41.043 get_feature(0x02) failed 00:20:41.043 get_feature(0x04) failed 00:20:41.043 ===================================================== 00:20:41.043 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:41.043 ===================================================== 00:20:41.043 Controller Capabilities/Features 00:20:41.043 ================================ 00:20:41.043 Vendor ID: 0000 00:20:41.043 Subsystem Vendor ID: 0000 00:20:41.043 Serial Number: a3994002ee8a17f302a3 00:20:41.043 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:41.043 Firmware Version: 6.8.9-20 00:20:41.043 Recommended Arb Burst: 6 00:20:41.043 IEEE OUI Identifier: 00 00 00 00:20:41.043 Multi-path I/O 00:20:41.043 May have multiple subsystem ports: Yes 00:20:41.043 May have multiple controllers: Yes 00:20:41.043 Associated with SR-IOV VF: No 00:20:41.043 Max Data Transfer Size: Unlimited 00:20:41.043 Max Number of Namespaces: 1024 00:20:41.043 Max Number of I/O Queues: 128 00:20:41.043 NVMe Specification Version (VS): 1.3 00:20:41.043 NVMe Specification Version (Identify): 1.3 00:20:41.043 Maximum Queue Entries: 1024 00:20:41.043 Contiguous Queues Required: No 00:20:41.043 Arbitration Mechanisms Supported 00:20:41.043 Weighted Round Robin: Not Supported 00:20:41.043 Vendor Specific: Not Supported 00:20:41.043 Reset Timeout: 7500 ms 00:20:41.043 Doorbell Stride: 4 bytes 00:20:41.043 NVM Subsystem Reset: Not Supported 00:20:41.043 Command Sets Supported 00:20:41.043 NVM Command Set: Supported 00:20:41.043 Boot Partition: Not Supported 00:20:41.043 Memory 
Page Size Minimum: 4096 bytes 00:20:41.043 Memory Page Size Maximum: 4096 bytes 00:20:41.043 Persistent Memory Region: Not Supported 00:20:41.043 Optional Asynchronous Events Supported 00:20:41.043 Namespace Attribute Notices: Supported 00:20:41.043 Firmware Activation Notices: Not Supported 00:20:41.043 ANA Change Notices: Supported 00:20:41.043 PLE Aggregate Log Change Notices: Not Supported 00:20:41.043 LBA Status Info Alert Notices: Not Supported 00:20:41.043 EGE Aggregate Log Change Notices: Not Supported 00:20:41.043 Normal NVM Subsystem Shutdown event: Not Supported 00:20:41.043 Zone Descriptor Change Notices: Not Supported 00:20:41.043 Discovery Log Change Notices: Not Supported 00:20:41.043 Controller Attributes 00:20:41.043 128-bit Host Identifier: Supported 00:20:41.043 Non-Operational Permissive Mode: Not Supported 00:20:41.043 NVM Sets: Not Supported 00:20:41.043 Read Recovery Levels: Not Supported 00:20:41.043 Endurance Groups: Not Supported 00:20:41.043 Predictable Latency Mode: Not Supported 00:20:41.043 Traffic Based Keep ALive: Supported 00:20:41.043 Namespace Granularity: Not Supported 00:20:41.043 SQ Associations: Not Supported 00:20:41.043 UUID List: Not Supported 00:20:41.043 Multi-Domain Subsystem: Not Supported 00:20:41.043 Fixed Capacity Management: Not Supported 00:20:41.043 Variable Capacity Management: Not Supported 00:20:41.043 Delete Endurance Group: Not Supported 00:20:41.043 Delete NVM Set: Not Supported 00:20:41.043 Extended LBA Formats Supported: Not Supported 00:20:41.043 Flexible Data Placement Supported: Not Supported 00:20:41.043 00:20:41.043 Controller Memory Buffer Support 00:20:41.043 ================================ 00:20:41.043 Supported: No 00:20:41.043 00:20:41.043 Persistent Memory Region Support 00:20:41.043 ================================ 00:20:41.043 Supported: No 00:20:41.043 00:20:41.043 Admin Command Set Attributes 00:20:41.043 ============================ 00:20:41.043 Security Send/Receive: Not Supported 00:20:41.043 Format NVM: Not Supported 00:20:41.043 Firmware Activate/Download: Not Supported 00:20:41.043 Namespace Management: Not Supported 00:20:41.043 Device Self-Test: Not Supported 00:20:41.043 Directives: Not Supported 00:20:41.043 NVMe-MI: Not Supported 00:20:41.043 Virtualization Management: Not Supported 00:20:41.043 Doorbell Buffer Config: Not Supported 00:20:41.043 Get LBA Status Capability: Not Supported 00:20:41.043 Command & Feature Lockdown Capability: Not Supported 00:20:41.043 Abort Command Limit: 4 00:20:41.043 Async Event Request Limit: 4 00:20:41.043 Number of Firmware Slots: N/A 00:20:41.043 Firmware Slot 1 Read-Only: N/A 00:20:41.044 Firmware Activation Without Reset: N/A 00:20:41.044 Multiple Update Detection Support: N/A 00:20:41.044 Firmware Update Granularity: No Information Provided 00:20:41.044 Per-Namespace SMART Log: Yes 00:20:41.044 Asymmetric Namespace Access Log Page: Supported 00:20:41.044 ANA Transition Time : 10 sec 00:20:41.044 00:20:41.044 Asymmetric Namespace Access Capabilities 00:20:41.044 ANA Optimized State : Supported 00:20:41.044 ANA Non-Optimized State : Supported 00:20:41.044 ANA Inaccessible State : Supported 00:20:41.044 ANA Persistent Loss State : Supported 00:20:41.044 ANA Change State : Supported 00:20:41.044 ANAGRPID is not changed : No 00:20:41.044 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:41.044 00:20:41.044 ANA Group Identifier Maximum : 128 00:20:41.044 Number of ANA Group Identifiers : 128 00:20:41.044 Max Number of Allowed Namespaces : 1024 00:20:41.044 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:41.044 Command Effects Log Page: Supported 00:20:41.044 Get Log Page Extended Data: Supported 00:20:41.044 Telemetry Log Pages: Not Supported 00:20:41.044 Persistent Event Log Pages: Not Supported 00:20:41.044 Supported Log Pages Log Page: May Support 00:20:41.044 Commands Supported & Effects Log Page: Not Supported 00:20:41.044 Feature Identifiers & Effects Log Page:May Support 00:20:41.044 NVMe-MI Commands & Effects Log Page: May Support 00:20:41.044 Data Area 4 for Telemetry Log: Not Supported 00:20:41.044 Error Log Page Entries Supported: 128 00:20:41.044 Keep Alive: Supported 00:20:41.044 Keep Alive Granularity: 1000 ms 00:20:41.044 00:20:41.044 NVM Command Set Attributes 00:20:41.044 ========================== 00:20:41.044 Submission Queue Entry Size 00:20:41.044 Max: 64 00:20:41.044 Min: 64 00:20:41.044 Completion Queue Entry Size 00:20:41.044 Max: 16 00:20:41.044 Min: 16 00:20:41.044 Number of Namespaces: 1024 00:20:41.044 Compare Command: Not Supported 00:20:41.044 Write Uncorrectable Command: Not Supported 00:20:41.044 Dataset Management Command: Supported 00:20:41.044 Write Zeroes Command: Supported 00:20:41.044 Set Features Save Field: Not Supported 00:20:41.044 Reservations: Not Supported 00:20:41.044 Timestamp: Not Supported 00:20:41.044 Copy: Not Supported 00:20:41.044 Volatile Write Cache: Present 00:20:41.044 Atomic Write Unit (Normal): 1 00:20:41.044 Atomic Write Unit (PFail): 1 00:20:41.044 Atomic Compare & Write Unit: 1 00:20:41.044 Fused Compare & Write: Not Supported 00:20:41.044 Scatter-Gather List 00:20:41.044 SGL Command Set: Supported 00:20:41.044 SGL Keyed: Not Supported 00:20:41.044 SGL Bit Bucket Descriptor: Not Supported 00:20:41.044 SGL Metadata Pointer: Not Supported 00:20:41.044 Oversized SGL: Not Supported 00:20:41.044 SGL Metadata Address: Not Supported 00:20:41.044 SGL Offset: Supported 00:20:41.044 Transport SGL Data Block: Not Supported 00:20:41.044 Replay Protected Memory Block: Not Supported 00:20:41.044 00:20:41.044 Firmware Slot Information 00:20:41.044 ========================= 00:20:41.044 Active slot: 0 00:20:41.044 00:20:41.044 Asymmetric Namespace Access 00:20:41.044 =========================== 00:20:41.044 Change Count : 0 00:20:41.044 Number of ANA Group Descriptors : 1 00:20:41.044 ANA Group Descriptor : 0 00:20:41.044 ANA Group ID : 1 00:20:41.044 Number of NSID Values : 1 00:20:41.044 Change Count : 0 00:20:41.044 ANA State : 1 00:20:41.044 Namespace Identifier : 1 00:20:41.044 00:20:41.044 Commands Supported and Effects 00:20:41.044 ============================== 00:20:41.044 Admin Commands 00:20:41.044 -------------- 00:20:41.044 Get Log Page (02h): Supported 00:20:41.044 Identify (06h): Supported 00:20:41.044 Abort (08h): Supported 00:20:41.044 Set Features (09h): Supported 00:20:41.044 Get Features (0Ah): Supported 00:20:41.044 Asynchronous Event Request (0Ch): Supported 00:20:41.044 Keep Alive (18h): Supported 00:20:41.044 I/O Commands 00:20:41.044 ------------ 00:20:41.044 Flush (00h): Supported 00:20:41.044 Write (01h): Supported LBA-Change 00:20:41.044 Read (02h): Supported 00:20:41.044 Write Zeroes (08h): Supported LBA-Change 00:20:41.044 Dataset Management (09h): Supported 00:20:41.044 00:20:41.044 Error Log 00:20:41.044 ========= 00:20:41.044 Entry: 0 00:20:41.044 Error Count: 0x3 00:20:41.044 Submission Queue Id: 0x0 00:20:41.044 Command Id: 0x5 00:20:41.044 Phase Bit: 0 00:20:41.044 Status Code: 0x2 00:20:41.044 Status Code Type: 0x0 00:20:41.044 Do Not Retry: 1 00:20:41.044 Error 
Location: 0x28 00:20:41.044 LBA: 0x0 00:20:41.044 Namespace: 0x0 00:20:41.044 Vendor Log Page: 0x0 00:20:41.044 ----------- 00:20:41.044 Entry: 1 00:20:41.044 Error Count: 0x2 00:20:41.044 Submission Queue Id: 0x0 00:20:41.044 Command Id: 0x5 00:20:41.044 Phase Bit: 0 00:20:41.044 Status Code: 0x2 00:20:41.044 Status Code Type: 0x0 00:20:41.044 Do Not Retry: 1 00:20:41.044 Error Location: 0x28 00:20:41.044 LBA: 0x0 00:20:41.044 Namespace: 0x0 00:20:41.044 Vendor Log Page: 0x0 00:20:41.044 ----------- 00:20:41.044 Entry: 2 00:20:41.044 Error Count: 0x1 00:20:41.044 Submission Queue Id: 0x0 00:20:41.044 Command Id: 0x4 00:20:41.044 Phase Bit: 0 00:20:41.044 Status Code: 0x2 00:20:41.044 Status Code Type: 0x0 00:20:41.044 Do Not Retry: 1 00:20:41.044 Error Location: 0x28 00:20:41.044 LBA: 0x0 00:20:41.044 Namespace: 0x0 00:20:41.044 Vendor Log Page: 0x0 00:20:41.044 00:20:41.044 Number of Queues 00:20:41.044 ================ 00:20:41.044 Number of I/O Submission Queues: 128 00:20:41.044 Number of I/O Completion Queues: 128 00:20:41.044 00:20:41.044 ZNS Specific Controller Data 00:20:41.044 ============================ 00:20:41.044 Zone Append Size Limit: 0 00:20:41.044 00:20:41.044 00:20:41.044 Active Namespaces 00:20:41.044 ================= 00:20:41.044 get_feature(0x05) failed 00:20:41.044 Namespace ID:1 00:20:41.044 Command Set Identifier: NVM (00h) 00:20:41.044 Deallocate: Supported 00:20:41.044 Deallocated/Unwritten Error: Not Supported 00:20:41.044 Deallocated Read Value: Unknown 00:20:41.044 Deallocate in Write Zeroes: Not Supported 00:20:41.044 Deallocated Guard Field: 0xFFFF 00:20:41.044 Flush: Supported 00:20:41.044 Reservation: Not Supported 00:20:41.044 Namespace Sharing Capabilities: Multiple Controllers 00:20:41.044 Size (in LBAs): 1310720 (5GiB) 00:20:41.044 Capacity (in LBAs): 1310720 (5GiB) 00:20:41.044 Utilization (in LBAs): 1310720 (5GiB) 00:20:41.044 UUID: 90bba2c6-3dfb-4597-86f2-76590fa2f9df 00:20:41.044 Thin Provisioning: Not Supported 00:20:41.044 Per-NS Atomic Units: Yes 00:20:41.044 Atomic Boundary Size (Normal): 0 00:20:41.044 Atomic Boundary Size (PFail): 0 00:20:41.044 Atomic Boundary Offset: 0 00:20:41.044 NGUID/EUI64 Never Reused: No 00:20:41.044 ANA group ID: 1 00:20:41.044 Namespace Write Protected: No 00:20:41.044 Number of LBA Formats: 1 00:20:41.044 Current LBA Format: LBA Format #00 00:20:41.044 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:41.044 00:20:41.044 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:41.044 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:41.044 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:41.304 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.304 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:41.304 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.304 18:37:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.304 rmmod nvme_tcp 00:20:41.304 rmmod nvme_fabrics 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:41.304 18:37:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.304 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:20:41.564 18:37:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:42.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:42.393 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:42.393 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:42.393 00:20:42.393 real 0m3.266s 00:20:42.393 user 0m1.150s 00:20:42.393 sys 0m1.489s 00:20:42.393 ************************************ 00:20:42.393 END TEST nvmf_identify_kernel_target 00:20:42.393 ************************************ 00:20:42.393 18:38:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.393 18:38:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 18:38:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:42.393 18:38:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:42.393 18:38:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.393 18:38:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 ************************************ 00:20:42.393 START TEST nvmf_auth_host 00:20:42.393 ************************************ 00:20:42.393 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:42.653 * Looking for test storage... 
00:20:42.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.653 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:42.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.654 --rc genhtml_branch_coverage=1 00:20:42.654 --rc genhtml_function_coverage=1 00:20:42.654 --rc genhtml_legend=1 00:20:42.654 --rc geninfo_all_blocks=1 00:20:42.654 --rc geninfo_unexecuted_blocks=1 00:20:42.654 00:20:42.654 ' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:42.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.654 --rc genhtml_branch_coverage=1 00:20:42.654 --rc genhtml_function_coverage=1 00:20:42.654 --rc genhtml_legend=1 00:20:42.654 --rc geninfo_all_blocks=1 00:20:42.654 --rc geninfo_unexecuted_blocks=1 00:20:42.654 00:20:42.654 ' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:42.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.654 --rc genhtml_branch_coverage=1 00:20:42.654 --rc genhtml_function_coverage=1 00:20:42.654 --rc genhtml_legend=1 00:20:42.654 --rc geninfo_all_blocks=1 00:20:42.654 --rc geninfo_unexecuted_blocks=1 00:20:42.654 00:20:42.654 ' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:42.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.654 --rc genhtml_branch_coverage=1 00:20:42.654 --rc genhtml_function_coverage=1 00:20:42.654 --rc genhtml_legend=1 00:20:42.654 --rc geninfo_all_blocks=1 00:20:42.654 --rc geninfo_unexecuted_blocks=1 00:20:42.654 00:20:42.654 ' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.654 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:42.654 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:42.655 Cannot find device "nvmf_init_br" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:42.655 Cannot find device "nvmf_init_br2" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:42.655 Cannot find device "nvmf_tgt_br" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.655 Cannot find device "nvmf_tgt_br2" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:42.655 Cannot find device "nvmf_init_br" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:42.655 Cannot find device "nvmf_init_br2" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:42.655 Cannot find device "nvmf_tgt_br" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:42.655 Cannot find device "nvmf_tgt_br2" 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:42.655 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:42.915 Cannot find device "nvmf_br" 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:42.916 Cannot find device "nvmf_init_if" 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:42.916 Cannot find device "nvmf_init_if2" 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.916 18:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
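The trace lines above are nvmf_veth_init building the test network for this host-side run: a dedicated nvmf_tgt_ns_spdk namespace for the SPDK target, two veth pairs per side, addresses 10.0.0.1/2 for the initiator and 10.0.0.3/4 inside the namespace, all stitched together by the nvmf_br bridge (the remaining bridge port, the SPDK_NVMF iptables rules and the ping checks follow in the next lines of the log). For orientation, a condensed, hand-written sketch of that topology is shown here; it reuses the interface names and addresses printed in the log but is not the actual helper from test/nvmf/common.sh, which additionally tears down leftovers from earlier runs.

    # Sketch only: condensed equivalent of the nvmf_veth_init steps traced above.
    ip netns add nvmf_tgt_ns_spdk

    # Veth pairs: the *_if end carries an IP address, the *_br end is enslaved to the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side interfaces live inside the namespace; the initiator side stays in the root ns.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and tie the bridge-side ends together with nvmf_br,
    # so the initiator addresses (10.0.0.1/2) can reach the target addresses (10.0.0.3/4).
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" master nvmf_br
    done

Keeping the target in its own namespace lets the test exercise real TCP traffic between two distinct network stacks on a single VM, which is why the pings from both directions appear in the trace that follows.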
00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:42.916 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:43.176 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.176 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:43.176 00:20:43.176 --- 10.0.0.3 ping statistics --- 00:20:43.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.176 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:43.176 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:43.176 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:20:43.176 00:20:43.176 --- 10.0.0.4 ping statistics --- 00:20:43.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.176 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:43.176 00:20:43.176 --- 10.0.0.1 ping statistics --- 00:20:43.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.176 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:43.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:43.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:43.176 00:20:43.176 --- 10.0.0.2 ping statistics --- 00:20:43.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.176 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=92848 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 92848 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92848 ']' 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
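With the veth topology and the SPDK_NVMF firewall rules confirmed by the pings above, the trace starts the target application itself: nvmfappstart launches nvmf_tgt inside the namespace with the nvme_auth debug log flag, records nvmfpid=92848, and then waitforlisten blocks until the app is ready on /var/tmp/spdk.sock. A minimal stand-in for that start-and-wait step could look like the sketch below; the poll loop is a simplification (the real waitforlisten in autotest_common.sh probes the RPC server rather than just the socket file), while the binary path, flags and socket path are the ones printed in the log.

    # Sketch only: simplified nvmfappstart/waitforlisten under the assumptions above.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    rpc_sock=/var/tmp/spdk.sock

    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # Crude wait: poll for the UNIX-domain RPC socket while the process is still alive.
    for _ in $(seq 1 100); do
        [[ -S $rpc_sock ]] && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_sock"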
00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.176 18:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=fc732bf5e54fcf949f649b0e0040fee5 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.dfH 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key fc732bf5e54fcf949f649b0e0040fee5 0 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 fc732bf5e54fcf949f649b0e0040fee5 0 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=fc732bf5e54fcf949f649b0e0040fee5 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:43.435 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.dfH 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.dfH 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dfH 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.695 18:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=861c78392f81f6f9908a133d3bcc29b1b8fab34bc26fe801773db95a85dc77bc 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.YlL 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 861c78392f81f6f9908a133d3bcc29b1b8fab34bc26fe801773db95a85dc77bc 3 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 861c78392f81f6f9908a133d3bcc29b1b8fab34bc26fe801773db95a85dc77bc 3 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=861c78392f81f6f9908a133d3bcc29b1b8fab34bc26fe801773db95a85dc77bc 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.YlL 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.YlL 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.YlL 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=cf5dcade453770bc5e60d1e7d5b8cc3600d7da498264dcd3 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.e4b 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key cf5dcade453770bc5e60d1e7d5b8cc3600d7da498264dcd3 0 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 cf5dcade453770bc5e60d1e7d5b8cc3600d7da498264dcd3 0 
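The auth.sh setup traced here generates the DHCHAP secrets the test will register: gen_dhchap_key pulls random bytes through xxd, wraps them as a DHHC-1 secret via an inline Python snippet, and stores the result in a mode-0600 /tmp/spdk.key-* file; keys[0]/ckeys[0] are produced above and the remaining keys[1..3] and ckeys[1..3] are produced the same way in the lines that follow. A hypothetical, condensed re-creation of that helper is sketched below for orientation only; the digest map and the DHHC-1 payload layout (base64 of the key bytes plus a little-endian CRC32) reflect the common DH-HMAC-CHAP secret convention and a reading of the trace, not a verified copy of nvmf/common.sh.

    # Sketch only: hypothetical gen_dhchap_key, modeled on the trace above.
    gen_dhchap_key() {
        local digest=$1 len=$2                      # e.g. "null" 32, "sha512" 64 (len = hex chars)
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes as a hex string
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # Assumed DHHC-1 wrapping: "DHHC-1:<digest id>:" + base64(key bytes + CRC32) + ":"
        python3 - "$key" "${digests[$digest]}" > "$file" <<'EOF'
    import sys, base64, zlib, struct
    key = bytes.fromhex(sys.argv[1])
    crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
    EOF
        chmod 0600 "$file"
        echo "$file"
    }

Under that sketch, a null-digest key like keys[0] would come from gen_dhchap_key null 32 and its sha512 counterpart ckeys[0] from gen_dhchap_key sha512 64, matching the calls visible in the trace.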
00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=cf5dcade453770bc5e60d1e7d5b8cc3600d7da498264dcd3 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.e4b 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.e4b 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.e4b 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=08375ff0f6a7e838dd73d885f067fd6f8d9bece68e2f37dc 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.gS3 00:20:43.695 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 08375ff0f6a7e838dd73d885f067fd6f8d9bece68e2f37dc 2 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 08375ff0f6a7e838dd73d885f067fd6f8d9bece68e2f37dc 2 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=08375ff0f6a7e838dd73d885f067fd6f8d9bece68e2f37dc 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.gS3 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.gS3 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gS3 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.696 18:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=3738709c832d9778c4502e574dcf9fff 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.uEl 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 3738709c832d9778c4502e574dcf9fff 1 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 3738709c832d9778c4502e574dcf9fff 1 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=3738709c832d9778c4502e574dcf9fff 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:43.696 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.uEl 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.uEl 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.uEl 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:43.955 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5b4ac68fe1117d30e796ac17ec1d0779 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.L1p 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5b4ac68fe1117d30e796ac17ec1d0779 1 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5b4ac68fe1117d30e796ac17ec1d0779 1 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=5b4ac68fe1117d30e796ac17ec1d0779 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.L1p 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.L1p 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.L1p 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=8f609b53f1dc056a66a2124301f25fd1b59b1cf9d33d146f 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.9bZ 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 8f609b53f1dc056a66a2124301f25fd1b59b1cf9d33d146f 2 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 8f609b53f1dc056a66a2124301f25fd1b59b1cf9d33d146f 2 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=8f609b53f1dc056a66a2124301f25fd1b59b1cf9d33d146f 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.9bZ 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.9bZ 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9bZ 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:43.956 18:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=766ec10198ab16003fea765ab4473551 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.md5 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 766ec10198ab16003fea765ab4473551 0 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 766ec10198ab16003fea765ab4473551 0 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=766ec10198ab16003fea765ab4473551 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.md5 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.md5 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.md5 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=3483043b7a99285a8b625ddd787e6f975e6e624b1d3f2ee2b18a058bd6d7231c 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.FUq 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 3483043b7a99285a8b625ddd787e6f975e6e624b1d3f2ee2b18a058bd6d7231c 3 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 3483043b7a99285a8b625ddd787e6f975e6e624b1d3f2ee2b18a058bd6d7231c 3 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=3483043b7a99285a8b625ddd787e6f975e6e624b1d3f2ee2b18a058bd6d7231c 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:43.956 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.FUq 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.FUq 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.FUq 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92848 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92848 ']' 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.215 18:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dfH 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.YlL ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YlL 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.e4b 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gS3 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.gS3 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.uEl 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.L1p ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L1p 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9bZ 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.md5 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.md5 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.FUq 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:44.474 18:38:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:44.474 18:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:45.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.042 Waiting for block devices as requested 00:20:45.042 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.042 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:45.610 No valid GPT data, bailing 00:20:45.610 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:45.869 No valid GPT data, bailing 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:45.869 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:45.870 No valid GPT data, bailing 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:45.870 No valid GPT data, bailing 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:45.870 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:46.129 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -a 10.0.0.1 -t tcp -s 4420 00:20:46.130 00:20:46.130 Discovery Log Number of Records 2, Generation counter 2 00:20:46.130 =====Discovery Log Entry 0====== 00:20:46.130 trtype: tcp 00:20:46.130 adrfam: ipv4 00:20:46.130 subtype: current discovery subsystem 00:20:46.130 treq: not specified, sq flow control disable supported 00:20:46.130 portid: 1 00:20:46.130 trsvcid: 4420 00:20:46.130 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:46.130 traddr: 10.0.0.1 00:20:46.130 eflags: none 00:20:46.130 sectype: none 00:20:46.130 =====Discovery Log Entry 1====== 00:20:46.130 trtype: tcp 00:20:46.130 adrfam: ipv4 00:20:46.130 subtype: nvme subsystem 00:20:46.130 treq: not specified, sq flow control disable supported 00:20:46.130 portid: 1 00:20:46.130 trsvcid: 4420 00:20:46.130 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:46.130 traddr: 10.0.0.1 00:20:46.130 eflags: none 00:20:46.130 sectype: none 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.130 18:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.390 nvme0n1 00:20:46.390 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.390 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.390 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.391 nvme0n1 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.391 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.651 
18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.651 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.652 18:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.652 nvme0n1 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:46.652 18:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.652 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.911 nvme0n1 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:46.911 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.912 18:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.912 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.171 nvme0n1 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.171 
18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.171 18:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
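[editor's note] The trace above has just finished one full pass (sha256 / ffdhe2048, keyids 0-4) and the records that follow repeat the same cycle for ffdhe3072, ffdhe4096 and ffdhe6144. As a reading aid, here is a minimal bash sketch of the host-side cycle each pass performs, reconstructed only from the commands visible in this log. The rpc_cmd wrapper, the NQNs, the 10.0.0.1/4420 listener and the key names (key3/ckey3) are taken verbatim from the trace; the digest/dhgroup/keyid/target_ip variables are illustrative, and the sketch assumes the DH-HMAC-CHAP keys were already registered with the SPDK keyring earlier in the run (that setup is not shown in this section).

# illustrative values for one iteration of the loop seen in the log
digest=sha256
dhgroup=ffdhe2048
keyid=3
target_ip=10.0.0.1

# Host side: restrict the initiator to a single digest and DH group, then
# attach with the DH-HMAC-CHAP key (and optional controller key) for this keyid.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$target_ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated connection produced the expected controller,
# then detach so the next digest/dhgroup/keyid combination starts clean.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The corresponding target-side step (nvmet_auth_set_key, the echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... records in the trace) pushes the same digest, DH group and key to the kernel nvmet target before each connect; its exact configfs paths are not visible in this excerpt, so they are not reproduced here.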
00:20:47.171 nvme0n1 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.171 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.429 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.686 18:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.686 nvme0n1 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.686 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.944 18:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.944 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.945 18:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.945 nvme0n1 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.945 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.203 nvme0n1 00:20:48.203 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.203 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.203 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.203 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.203 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.203 18:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.203 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.204 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.204 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:48.204 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.204 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.462 nvme0n1 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.462 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.463 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.463 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.463 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.463 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.721 nvme0n1 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.721 18:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.312 18:38:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.312 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.571 nvme0n1 00:20:49.571 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.571 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.571 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.572 18:38:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.572 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.832 nvme0n1 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.832 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 nvme0n1 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.091 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.092 18:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.352 nvme0n1 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.352 18:38:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.352 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.612 nvme0n1 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.612 18:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.988 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.989 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.989 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.989 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.989 18:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.248 nvme0n1 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.248 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.249 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.816 nvme0n1 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.816 18:38:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.816 18:38:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.816 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.075 nvme0n1 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:53.075 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:53.076 18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.076 
18:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.334 nvme0n1 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.334 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.594 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.856 nvme0n1 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.856 18:38:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.856 18:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.503 nvme0n1 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.503 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.504 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.072 nvme0n1 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.072 
18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.072 18:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.641 nvme0n1 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.641 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.642 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.211 nvme0n1 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.211 18:38:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.211 18:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.211 18:38:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.211 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.779 nvme0n1 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.779 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.780 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.039 nvme0n1 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:57.039 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.040 nvme0n1 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.040 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:57.300 
18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.300 18:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.300 nvme0n1 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:57.300 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.301 
18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.301 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.560 nvme0n1 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.560 nvme0n1 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.560 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.819 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.820 nvme0n1 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.820 
18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.820 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.079 18:38:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.079 nvme0n1 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:58.079 18:38:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.079 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.080 18:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 nvme0n1 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 18:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.599 nvme0n1 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.599 
18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.599 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
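The iterations above repeat the same DH-CHAP flow for every digest/dhgroup/keyid combination: configure the allowed digests and DH groups on the host bdev layer, attach a controller with the per-key (and optional controller) DH-CHAP secrets, confirm the controller appears, then detach it. A condensed sketch of one such iteration is shown below; it assumes an SPDK target is already listening on 10.0.0.1:4420 and that the keys named key0/ckey0 were registered earlier in the test (not shown in this excerpt), with rpc_cmd standing in for the autotest RPC wrapper used in the log.

  # Restrict DH-CHAP negotiation to the digest/dhgroup under test (values taken from the log).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # Attach with mutual authentication: key0 authenticates the host, ckey0 the controller.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Verify the authenticated controller came up, then tear it down before the next iteration.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0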
00:20:58.600 nvme0n1 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.600 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.860 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.860 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.861 18:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.861 nvme0n1 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.861 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.120 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.120 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.121 18:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.121 18:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.121 18:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.121 nvme0n1 00:20:59.121 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.121 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.121 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.121 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.121 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.121 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.380 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.381 nvme0n1 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.381 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:59.641 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.642 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.642 nvme0n1 00:20:59.642 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.642 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.642 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.642 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.642 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.642 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.902 nvme0n1 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.902 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.162 18:38:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.162 18:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.421 nvme0n1 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.421 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.422 18:38:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.422 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.683 nvme0n1 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.683 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.251 nvme0n1 00:21:01.251 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.251 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.252 18:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.252 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.512 nvme0n1 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.512 18:38:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.512 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.078 nvme0n1 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.078 18:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.646 nvme0n1 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.646 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.214 nvme0n1 00:21:03.214 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.214 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.215 18:38:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.215 18:38:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.215 18:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.785 nvme0n1 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:03.785 18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.785 
18:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.353 nvme0n1 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.353 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.354 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.921 nvme0n1 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:04.921 18:38:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.921 18:38:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.921 nvme0n1 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.921 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:05.180 18:38:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.180 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.181 18:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 nvme0n1 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.181 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.439 nvme0n1 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:05.439 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.440 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.699 nvme0n1 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.699 nvme0n1 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.699 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.958 nvme0n1 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:05.958 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.959 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.217 nvme0n1 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.217 18:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.217 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.217 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.217 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:06.217 
18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.217 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.217 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.217 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.217 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.218 nvme0n1 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.218 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.476 
18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.476 nvme0n1 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.476 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.477 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.735 nvme0n1 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.735 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.736 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.994 nvme0n1 00:21:06.994 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.994 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.994 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.995 
18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.995 18:38:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.995 18:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.254 nvme0n1 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:07.254 18:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.254 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.255 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.513 nvme0n1 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.513 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.514 18:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.514 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.773 nvme0n1 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.773 
18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.773 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.774 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
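For readability, the cycle that the xtrace keeps repeating above can be summarized as a short shell sketch. This is a hedged reconstruction, not part of the captured log: it assumes SPDK's standard `scripts/rpc.py` wrapper issues the same RPCs that the test's `rpc_cmd` helper traces, and that the DH-HMAC-CHAP secrets named `key2`/`ckey2` were already registered earlier in the test run; every flag, address, and NQN below is taken verbatim from the trace (the ffdhe4096 / keyid=2 iteration).

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from the trace above.
# Assumption: ./scripts/rpc.py targets the same SPDK app the test drives.
RPC=./scripts/rpc.py

# Restrict the initiator to the digest and DH group under test.
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Attach to the authenticated subsystem on the loopback target (10.0.0.1:4420),
# supplying both the host key and the controller (bidirectional) key.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down
# before the next digest/dhgroup/keyid combination is tried.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
$RPC bdev_nvme_detach_controller nvme0

The trace then repeats this pattern for each keyid (0-4) and each DH group (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192) with the sha512 digest, which is why the same set_options / attach / get_controllers / detach sequence recurs throughout this part of the log.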
00:21:08.032 nvme0n1 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:08.032 18:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.032 18:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.291 nvme0n1 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.291 18:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.291 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.292 18:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.292 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.859 nvme0n1 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.859 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.118 nvme0n1 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.118 18:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.377 nvme0n1 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.377 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.635 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.895 nvme0n1 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM3MzJiZjVlNTRmY2Y5NDlmNjQ5YjBlMDA0MGZlZTXoVvIn: 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODYxYzc4MzkyZjgxZjZmOTkwOGExMzNkM2JjYzI5YjFiOGZhYjM0YmMyNmZlODAxNzczZGI5NWE4NWRjNzdiY1D83/A=: 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.895 18:38:27 
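The entries above repeat one cycle per key: restrict the host to a single DH-HMAC-CHAP digest and DH group, attach a controller with that key pair, check that the controller shows up, then detach it again. A minimal sketch of one such iteration, assuming rpc_cmd wraps SPDK's scripts/rpc.py as in autotest_common.sh and that the key names key$keyid/ckey$keyid were registered earlier in the run (not shown in this excerpt):

    # One connect_authenticate iteration; digest, dhgroup and keyid are the loop variables traced above.
    digest=sha512 dhgroup=ffdhe6144 keyid=3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up and authenticated
    rpc_cmd bdev_nvme_detach_controller nvme0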
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.895 18:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.461 nvme0n1 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.461 18:38:28 
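Keys that carry no controller (bidirectional) secret are handled by the ${ckeys[keyid]:+...} expansion visible in the trace: when ckeys[keyid] is empty, as it is for keyid 4 above, the expansion produces no --dhchap-ctrlr-key argument at all, so that attach authenticates the host only. A small illustration of the idiom, using placeholder values:

    ckeys=([3]="placeholder-ctrlr-key" [4]="")   # placeholder data; keyid 4 has no controller key
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key argument>}"
    done
    # keyid=3 -> --dhchap-ctrlr-key ckey3
    # keyid=4 -> <no controller key argument>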
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.461 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.028 nvme0n1 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:11.028 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.029 18:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.597 nvme0n1 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGY2MDliNTNmMWRjMDU2YTY2YTIxMjQzMDFmMjVmZDFiNTliMWNmOWQzM2QxNDZmzUQKrg==: 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY2ZWMxMDE5OGFiMTYwMDNmZWE3NjVhYjQ0NzM1NTFzRidx: 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.597 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.165 nvme0n1 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.165 18:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ4MzA0M2I3YTk5Mjg1YThiNjI1ZGRkNzg3ZTZmOTc1ZTZlNjI0YjFkM2YyZWUyYjE4YTA1OGJkNmQ3MjMxYzlDmJg=: 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.165 18:38:30 
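The recurring get_main_ns_ip fragments reduce to a helper that maps the active transport to the right address variable and prints its value (10.0.0.1 throughout this run). A readable reconstruction of the traced behaviour, assuming TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP come from the test environment; the real helper in nvmf/common.sh may differ in detail:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # holds the variable *name* for this transport
        [[ -z ${!ip} ]] && return 1             # indirect expansion gives the actual address
        echo "${!ip}"                           # 10.0.0.1 for tcp in this run
    }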
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.165 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.733 nvme0n1 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.733 request: 00:21:12.733 { 00:21:12.733 "name": "nvme0", 00:21:12.733 "trtype": "tcp", 00:21:12.733 "traddr": "10.0.0.1", 00:21:12.733 "adrfam": "ipv4", 00:21:12.733 "trsvcid": "4420", 00:21:12.733 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:12.733 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:12.733 "prchk_reftag": false, 00:21:12.733 "prchk_guard": false, 00:21:12.733 "hdgst": false, 00:21:12.733 "ddgst": false, 00:21:12.733 "allow_unrecognized_csi": false, 00:21:12.733 "method": "bdev_nvme_attach_controller", 00:21:12.733 "req_id": 1 00:21:12.733 } 00:21:12.733 Got JSON-RPC error response 00:21:12.733 response: 00:21:12.733 { 00:21:12.733 "code": -5, 00:21:12.733 "message": "Input/output error" 00:21:12.733 } 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.733 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.993 request: 00:21:12.993 { 00:21:12.993 "name": "nvme0", 00:21:12.993 "trtype": "tcp", 00:21:12.993 "traddr": "10.0.0.1", 00:21:12.993 "adrfam": "ipv4", 00:21:12.993 "trsvcid": "4420", 00:21:12.993 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:12.993 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:12.993 "prchk_reftag": false, 00:21:12.993 "prchk_guard": false, 00:21:12.993 "hdgst": false, 00:21:12.993 "ddgst": false, 00:21:12.993 "dhchap_key": "key2", 00:21:12.993 "allow_unrecognized_csi": false, 00:21:12.993 "method": "bdev_nvme_attach_controller", 00:21:12.993 "req_id": 1 00:21:12.993 } 00:21:12.993 Got JSON-RPC error response 00:21:12.993 response: 00:21:12.993 { 00:21:12.993 "code": -5, 00:21:12.993 "message": "Input/output error" 00:21:12.993 } 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.993 18:38:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.993 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.994 request: 00:21:12.994 { 00:21:12.994 "name": "nvme0", 00:21:12.994 "trtype": "tcp", 00:21:12.994 "traddr": "10.0.0.1", 00:21:12.994 "adrfam": "ipv4", 00:21:12.994 "trsvcid": "4420", 
00:21:12.994 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:12.994 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:12.994 "prchk_reftag": false, 00:21:12.994 "prchk_guard": false, 00:21:12.994 "hdgst": false, 00:21:12.994 "ddgst": false, 00:21:12.994 "dhchap_key": "key1", 00:21:12.994 "dhchap_ctrlr_key": "ckey2", 00:21:12.994 "allow_unrecognized_csi": false, 00:21:12.994 "method": "bdev_nvme_attach_controller", 00:21:12.994 "req_id": 1 00:21:12.994 } 00:21:12.994 Got JSON-RPC error response 00:21:12.994 response: 00:21:12.994 { 00:21:12.994 "code": -5, 00:21:12.994 "message": "Input/output error" 00:21:12.994 } 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.994 nvme0n1 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.994 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.253 18:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.253 request: 00:21:13.253 { 00:21:13.253 "name": "nvme0", 00:21:13.253 "dhchap_key": "key1", 00:21:13.253 "dhchap_ctrlr_key": "ckey2", 00:21:13.253 "method": "bdev_nvme_set_keys", 00:21:13.253 "req_id": 1 00:21:13.253 } 00:21:13.253 Got JSON-RPC error response 00:21:13.253 response: 00:21:13.253 
{ 00:21:13.253 "code": -13, 00:21:13.253 "message": "Permission denied" 00:21:13.253 } 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:13.253 18:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:14.188 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:14.188 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.188 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.188 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.188 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.447 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:14.447 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y1ZGNhZGU0NTM3NzBiYzVlNjBkMWU3ZDViOGNjMzYwMGQ3ZGE0OTgyNjRkY2Qz3s/fqA==: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDgzNzVmZjBmNmE3ZTgzOGRkNzNkODg1ZjA2N2ZkNmY4ZDliZWNlNjhlMmYzN2RjSeuauQ==: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.448 nvme0n1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczODcwOWM4MzJkOTc3OGM0NTAyZTU3NGRjZjlmZmaliKjE: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI0YWM2OGZlMTExN2QzMGU3OTZhYzE3ZWMxZDA3NznE8Rll: 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.448 request: 00:21:14.448 { 00:21:14.448 "name": "nvme0", 00:21:14.448 "dhchap_key": "key2", 00:21:14.448 "dhchap_ctrlr_key": "ckey1", 00:21:14.448 "method": "bdev_nvme_set_keys", 00:21:14.448 "req_id": 1 00:21:14.448 } 00:21:14.448 Got JSON-RPC error response 00:21:14.448 response: 00:21:14.448 { 00:21:14.448 "code": -13, 00:21:14.448 "message": "Permission denied" 00:21:14.448 } 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:14.448 18:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.827 rmmod nvme_tcp 00:21:15.827 rmmod nvme_fabrics 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 92848 ']' 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 92848 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 92848 ']' 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 92848 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92848 00:21:15.827 killing process with pid 92848 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92848' 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 92848 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 92848 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:15.827 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:15.828 18:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:15.828 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:16.088 18:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:17.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:17.028 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:21:17.028 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:17.028 18:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dfH /tmp/spdk.key-null.e4b /tmp/spdk.key-sha256.uEl /tmp/spdk.key-sha384.9bZ /tmp/spdk.key-sha512.FUq /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:17.028 18:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:17.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:17.594 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:17.594 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:17.594 00:21:17.594 real 0m35.068s 00:21:17.594 user 0m32.311s 00:21:17.594 sys 0m3.952s 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:17.594 ************************************ 00:21:17.594 END TEST nvmf_auth_host 00:21:17.594 ************************************ 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.594 ************************************ 00:21:17.594 START TEST nvmf_digest 00:21:17.594 ************************************ 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:17.594 * Looking for test storage... 
00:21:17.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:21:17.594 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.853 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.854 --rc genhtml_branch_coverage=1 00:21:17.854 --rc genhtml_function_coverage=1 00:21:17.854 --rc genhtml_legend=1 00:21:17.854 --rc geninfo_all_blocks=1 00:21:17.854 --rc geninfo_unexecuted_blocks=1 00:21:17.854 00:21:17.854 ' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.854 --rc genhtml_branch_coverage=1 00:21:17.854 --rc genhtml_function_coverage=1 00:21:17.854 --rc genhtml_legend=1 00:21:17.854 --rc geninfo_all_blocks=1 00:21:17.854 --rc geninfo_unexecuted_blocks=1 00:21:17.854 00:21:17.854 ' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.854 --rc genhtml_branch_coverage=1 00:21:17.854 --rc genhtml_function_coverage=1 00:21:17.854 --rc genhtml_legend=1 00:21:17.854 --rc geninfo_all_blocks=1 00:21:17.854 --rc geninfo_unexecuted_blocks=1 00:21:17.854 00:21:17.854 ' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.854 --rc genhtml_branch_coverage=1 00:21:17.854 --rc genhtml_function_coverage=1 00:21:17.854 --rc genhtml_legend=1 00:21:17.854 --rc geninfo_all_blocks=1 00:21:17.854 --rc geninfo_unexecuted_blocks=1 00:21:17.854 00:21:17.854 ' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.854 18:38:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.854 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:17.854 Cannot find device "nvmf_init_br" 00:21:17.854 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:17.855 Cannot find device "nvmf_init_br2" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:17.855 Cannot find device "nvmf_tgt_br" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:17.855 Cannot find device "nvmf_tgt_br2" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:17.855 Cannot find device "nvmf_init_br" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:17.855 Cannot find device "nvmf_init_br2" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:17.855 Cannot find device "nvmf_tgt_br" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:17.855 Cannot find device "nvmf_tgt_br2" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:17.855 Cannot find device "nvmf_br" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:17.855 Cannot find device "nvmf_init_if" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:17.855 Cannot find device "nvmf_init_if2" 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:17.855 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:18.114 18:38:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:18.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:18.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:21:18.114 00:21:18.114 --- 10.0.0.3 ping statistics --- 00:21:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.114 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:18.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:18.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:21:18.114 00:21:18.114 --- 10.0.0.4 ping statistics --- 00:21:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.114 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:18.114 18:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:18.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:18.114 00:21:18.114 --- 10.0.0.1 ping statistics --- 00:21:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.114 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:18.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:21:18.114 00:21:18.114 --- 10.0.0.2 ping statistics --- 00:21:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.114 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:18.114 ************************************ 00:21:18.114 START TEST nvmf_digest_clean 00:21:18.114 ************************************ 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.114 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=94492 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 94492 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94492 ']' 00:21:18.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.373 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.373 [2024-12-08 18:38:36.107220] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:18.373 [2024-12-08 18:38:36.107311] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.373 [2024-12-08 18:38:36.249832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.631 [2024-12-08 18:38:36.322034] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.631 [2024-12-08 18:38:36.322108] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.631 [2024-12-08 18:38:36.322124] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.631 [2024-12-08 18:38:36.322134] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.631 [2024-12-08 18:38:36.322144] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
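(A minimal sketch, not part of the captured output: the nvmf_tgt launch traced above can be reproduced by hand under the same assumptions -- the nvmf_tgt_ns_spdk namespace already exists and the default /var/tmp/spdk.sock RPC socket is used. The polling loop and the use of rpc_get_methods as a readiness probe stand in for the harness's waitforlisten helper and are assumptions, not commands taken from this log.)

  # launch the NVMe-oF target inside the test namespace, paused until RPC-driven init
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

  # wait for the RPC socket to answer, then let the framework finish starting
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init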
00:21:18.631 [2024-12-08 18:38:36.322180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.631 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:18.631 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:18.631 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:18.631 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.631 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.631 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.632 [2024-12-08 18:38:36.474634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:18.632 null0 00:21:18.632 [2024-12-08 18:38:36.527960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.632 [2024-12-08 18:38:36.552098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94517 00:21:18.632 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:18.891 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94517 /var/tmp/bperf.sock 00:21:18.891 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94517 ']' 00:21:18.891 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:18.891 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.891 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:18.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:18.891 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.891 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 [2024-12-08 18:38:36.619351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:18.891 [2024-12-08 18:38:36.619672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94517 ] 00:21:18.891 [2024-12-08 18:38:36.758400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.149 [2024-12-08 18:38:36.821377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.149 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.150 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:19.150 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:19.150 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:19.150 18:38:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:19.408 [2024-12-08 18:38:37.199654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:19.408 18:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.408 18:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.667 nvme0n1 00:21:19.667 18:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:19.667 18:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.926 Running I/O for 2 seconds... 
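(A minimal sketch, not part of the captured output: the initiator-side sequence behind the 2-second run whose results follow, restated as standalone commands. Each invocation mirrors a call visible in the surrounding trace; only the grouping into one snippet is editorial, and the trailing wait/kill handling is omitted.)

  # start bdevperf idle (-z) on its own RPC socket with framework init deferred
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # finish init, attach the target listener with TCP data digest enabled (--ddgst),
  # then drive the timed workload through bdevperf's helper script
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests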
00:21:21.796 18542.00 IOPS, 72.43 MiB/s [2024-12-08T18:38:39.726Z] 18542.00 IOPS, 72.43 MiB/s 00:21:21.796 Latency(us) 00:21:21.796 [2024-12-08T18:38:39.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.796 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:21.796 nvme0n1 : 2.00 18562.11 72.51 0.00 0.00 6891.51 6583.39 19899.11 00:21:21.796 [2024-12-08T18:38:39.726Z] =================================================================================================================== 00:21:21.796 [2024-12-08T18:38:39.726Z] Total : 18562.11 72.51 0.00 0.00 6891.51 6583.39 19899.11 00:21:21.796 { 00:21:21.796 "results": [ 00:21:21.796 { 00:21:21.796 "job": "nvme0n1", 00:21:21.796 "core_mask": "0x2", 00:21:21.796 "workload": "randread", 00:21:21.796 "status": "finished", 00:21:21.796 "queue_depth": 128, 00:21:21.796 "io_size": 4096, 00:21:21.796 "runtime": 2.004729, 00:21:21.796 "iops": 18562.109891162345, 00:21:21.796 "mibps": 72.50824176235291, 00:21:21.796 "io_failed": 0, 00:21:21.796 "io_timeout": 0, 00:21:21.796 "avg_latency_us": 6891.512695220506, 00:21:21.796 "min_latency_us": 6583.389090909091, 00:21:21.796 "max_latency_us": 19899.112727272728 00:21:21.796 } 00:21:21.796 ], 00:21:21.796 "core_count": 1 00:21:21.796 } 00:21:21.796 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:21.796 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:21.796 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:21.796 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:21.796 | select(.opcode=="crc32c") 00:21:21.796 | "\(.module_name) \(.executed)"' 00:21:21.796 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94517 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94517 ']' 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94517 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:22.056 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.326 18:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94517 00:21:22.326 killing process with pid 94517 00:21:22.326 Received shutdown signal, test time was about 2.000000 seconds 00:21:22.326 00:21:22.326 Latency(us) 00:21:22.326 [2024-12-08T18:38:40.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:22.327 [2024-12-08T18:38:40.257Z] =================================================================================================================== 00:21:22.327 [2024-12-08T18:38:40.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.327 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:22.327 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:22.327 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94517' 00:21:22.327 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94517 00:21:22.327 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94517 00:21:22.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94571 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94571 /var/tmp/bperf.sock 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94571 ']' 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.630 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.631 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:22.631 18:38:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:22.631 [2024-12-08 18:38:40.334849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:22.631 [2024-12-08 18:38:40.335147] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:21:22.631 Zero copy mechanism will not be used. 00:21:22.631 llocations --file-prefix=spdk_pid94571 ] 00:21:22.631 [2024-12-08 18:38:40.473419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.631 [2024-12-08 18:38:40.534951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.581 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.581 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:23.581 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:23.581 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:23.581 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:23.840 [2024-12-08 18:38:41.529721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.840 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:23.840 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.098 nvme0n1 00:21:24.098 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:24.098 18:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.098 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.098 Zero copy mechanism will not be used. 00:21:24.098 Running I/O for 2 seconds... 
00:21:26.407 7872.00 IOPS, 984.00 MiB/s [2024-12-08T18:38:44.337Z] 7872.00 IOPS, 984.00 MiB/s 00:21:26.407 Latency(us) 00:21:26.407 [2024-12-08T18:38:44.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.407 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:26.407 nvme0n1 : 2.00 7871.17 983.90 0.00 0.00 2029.90 1899.05 3336.38 00:21:26.407 [2024-12-08T18:38:44.337Z] =================================================================================================================== 00:21:26.407 [2024-12-08T18:38:44.337Z] Total : 7871.17 983.90 0.00 0.00 2029.90 1899.05 3336.38 00:21:26.407 { 00:21:26.407 "results": [ 00:21:26.407 { 00:21:26.407 "job": "nvme0n1", 00:21:26.407 "core_mask": "0x2", 00:21:26.407 "workload": "randread", 00:21:26.407 "status": "finished", 00:21:26.407 "queue_depth": 16, 00:21:26.407 "io_size": 131072, 00:21:26.407 "runtime": 2.002244, 00:21:26.407 "iops": 7871.168548888148, 00:21:26.407 "mibps": 983.8960686110184, 00:21:26.407 "io_failed": 0, 00:21:26.407 "io_timeout": 0, 00:21:26.407 "avg_latency_us": 2029.89641716659, 00:21:26.407 "min_latency_us": 1899.0545454545454, 00:21:26.407 "max_latency_us": 3336.378181818182 00:21:26.408 } 00:21:26.408 ], 00:21:26.408 "core_count": 1 00:21:26.408 } 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:26.408 | select(.opcode=="crc32c") 00:21:26.408 | "\(.module_name) \(.executed)"' 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94571 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94571 ']' 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94571 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94571 00:21:26.408 killing process with pid 94571 00:21:26.408 Received shutdown signal, test time was about 2.000000 seconds 00:21:26.408 00:21:26.408 Latency(us) 00:21:26.408 [2024-12-08T18:38:44.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:26.408 [2024-12-08T18:38:44.338Z] =================================================================================================================== 00:21:26.408 [2024-12-08T18:38:44.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94571' 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94571 00:21:26.408 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94571 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94627 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94627 /var/tmp/bperf.sock 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94627 ']' 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:26.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.722 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:26.722 [2024-12-08 18:38:44.633510] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:26.722 [2024-12-08 18:38:44.633794] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94627 ] 00:21:26.981 [2024-12-08 18:38:44.769767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.981 [2024-12-08 18:38:44.825003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.981 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.981 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:26.981 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:26.981 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:26.981 18:38:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:27.549 [2024-12-08 18:38:45.191079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.549 18:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.549 18:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.807 nvme0n1 00:21:27.807 18:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:27.807 18:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:27.807 Running I/O for 2 seconds... 
00:21:29.750 20067.00 IOPS, 78.39 MiB/s [2024-12-08T18:38:47.938Z] 20130.00 IOPS, 78.63 MiB/s 00:21:30.008 Latency(us) 00:21:30.008 [2024-12-08T18:38:47.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.008 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:30.009 nvme0n1 : 2.01 20095.77 78.50 0.00 0.00 6363.30 1906.50 15192.44 00:21:30.009 [2024-12-08T18:38:47.939Z] =================================================================================================================== 00:21:30.009 [2024-12-08T18:38:47.939Z] Total : 20095.77 78.50 0.00 0.00 6363.30 1906.50 15192.44 00:21:30.009 { 00:21:30.009 "results": [ 00:21:30.009 { 00:21:30.009 "job": "nvme0n1", 00:21:30.009 "core_mask": "0x2", 00:21:30.009 "workload": "randwrite", 00:21:30.009 "status": "finished", 00:21:30.009 "queue_depth": 128, 00:21:30.009 "io_size": 4096, 00:21:30.009 "runtime": 2.009776, 00:21:30.009 "iops": 20095.771867113548, 00:21:30.009 "mibps": 78.4991088559123, 00:21:30.009 "io_failed": 0, 00:21:30.009 "io_timeout": 0, 00:21:30.009 "avg_latency_us": 6363.3016035366045, 00:21:30.009 "min_latency_us": 1906.5018181818182, 00:21:30.009 "max_latency_us": 15192.436363636363 00:21:30.009 } 00:21:30.009 ], 00:21:30.009 "core_count": 1 00:21:30.009 } 00:21:30.009 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:30.009 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:30.009 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:30.009 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:30.009 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:30.009 | select(.opcode=="crc32c") 00:21:30.009 | "\(.module_name) \(.executed)"' 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94627 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94627 ']' 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94627 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.268 18:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94627 00:21:30.268 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:30.268 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:21:30.268 killing process with pid 94627 00:21:30.268 Received shutdown signal, test time was about 2.000000 seconds 00:21:30.268 00:21:30.268 Latency(us) 00:21:30.268 [2024-12-08T18:38:48.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.268 [2024-12-08T18:38:48.198Z] =================================================================================================================== 00:21:30.268 [2024-12-08T18:38:48.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.268 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94627' 00:21:30.268 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94627 00:21:30.268 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94627 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94681 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94681 /var/tmp/bperf.sock 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94681 ']' 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.528 18:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:30.528 [2024-12-08 18:38:48.309472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:30.528 [2024-12-08 18:38:48.309768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94681 ] 00:21:30.528 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:30.528 Zero copy mechanism will not be used. 00:21:30.528 [2024-12-08 18:38:48.443287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.788 [2024-12-08 18:38:48.501688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.356 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.356 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:31.356 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:31.356 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:31.356 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:31.615 [2024-12-08 18:38:49.534252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:31.874 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.874 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.133 nvme0n1 00:21:32.133 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:32.133 18:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:32.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:32.133 Zero copy mechanism will not be used. 00:21:32.133 Running I/O for 2 seconds... 
00:21:34.455 6697.00 IOPS, 837.12 MiB/s [2024-12-08T18:38:52.385Z] 6690.00 IOPS, 836.25 MiB/s 00:21:34.455 Latency(us) 00:21:34.455 [2024-12-08T18:38:52.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.455 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:34.455 nvme0n1 : 2.00 6688.48 836.06 0.00 0.00 2386.98 1653.29 5868.45 00:21:34.455 [2024-12-08T18:38:52.385Z] =================================================================================================================== 00:21:34.455 [2024-12-08T18:38:52.385Z] Total : 6688.48 836.06 0.00 0.00 2386.98 1653.29 5868.45 00:21:34.455 { 00:21:34.455 "results": [ 00:21:34.455 { 00:21:34.455 "job": "nvme0n1", 00:21:34.455 "core_mask": "0x2", 00:21:34.455 "workload": "randwrite", 00:21:34.455 "status": "finished", 00:21:34.455 "queue_depth": 16, 00:21:34.455 "io_size": 131072, 00:21:34.455 "runtime": 2.003744, 00:21:34.455 "iops": 6688.479166999377, 00:21:34.455 "mibps": 836.0598958749222, 00:21:34.455 "io_failed": 0, 00:21:34.455 "io_timeout": 0, 00:21:34.455 "avg_latency_us": 2386.9809390728656, 00:21:34.455 "min_latency_us": 1653.2945454545454, 00:21:34.455 "max_latency_us": 5868.450909090909 00:21:34.455 } 00:21:34.455 ], 00:21:34.455 "core_count": 1 00:21:34.456 } 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:34.456 | select(.opcode=="crc32c") 00:21:34.456 | "\(.module_name) \(.executed)"' 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94681 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94681 ']' 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94681 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94681 00:21:34.456 killing process with pid 94681 00:21:34.456 Received shutdown signal, test time was about 2.000000 seconds 00:21:34.456 00:21:34.456 Latency(us) 00:21:34.456 [2024-12-08T18:38:52.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:34.456 [2024-12-08T18:38:52.386Z] =================================================================================================================== 00:21:34.456 [2024-12-08T18:38:52.386Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94681' 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94681 00:21:34.456 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94681 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94492 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94492 ']' 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94492 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94492 00:21:34.716 killing process with pid 94492 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94492' 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94492 00:21:34.716 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94492 00:21:34.975 00:21:34.975 real 0m16.692s 00:21:34.975 user 0m31.372s 00:21:34.975 sys 0m5.558s 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:34.975 ************************************ 00:21:34.975 END TEST nvmf_digest_clean 00:21:34.975 ************************************ 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:34.975 ************************************ 00:21:34.975 START TEST nvmf_digest_error 00:21:34.975 ************************************ 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:21:34.975 18:38:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=94770 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 94770 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94770 ']' 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.975 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.976 18:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.976 [2024-12-08 18:38:52.836658] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:34.976 [2024-12-08 18:38:52.836732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.235 [2024-12-08 18:38:52.968594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.235 [2024-12-08 18:38:53.027520] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.235 [2024-12-08 18:38:53.027877] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.235 [2024-12-08 18:38:53.028104] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.235 [2024-12-08 18:38:53.028233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.235 [2024-12-08 18:38:53.028267] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.235 [2024-12-08 18:38:53.028323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.173 [2024-12-08 18:38:53.824898] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.173 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.173 [2024-12-08 18:38:53.884566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:36.173 null0 00:21:36.174 [2024-12-08 18:38:53.930099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.174 [2024-12-08 18:38:53.954227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94802 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94802 /var/tmp/bperf.sock 00:21:36.174 18:38:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94802 ']' 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:36.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:36.174 18:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.174 [2024-12-08 18:38:54.015139] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:36.174 [2024-12-08 18:38:54.015443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94802 ] 00:21:36.433 [2024-12-08 18:38:54.154189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.433 [2024-12-08 18:38:54.225652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.434 [2024-12-08 18:38:54.294592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:36.434 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.434 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:36.434 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.434 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.693 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:36.693 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.693 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.693 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.693 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.693 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:37.262 nvme0n1 00:21:37.262 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:37.263 18:38:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.263 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:37.263 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.263 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:37.263 18:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:37.263 Running I/O for 2 seconds... 00:21:37.263 [2024-12-08 18:38:55.070715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.070770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.070785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.084492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.084527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.084540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.097951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.098127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.098143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.111674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.111708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.111721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.125193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.125226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.125238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.138659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.138692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17512 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.138705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.152077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.152242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.152258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.165716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.165751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.165763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.263 [2024-12-08 18:38:55.179193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.263 [2024-12-08 18:38:55.179358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.263 [2024-12-08 18:38:55.179374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.193457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.193668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.193685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.207427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.207460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.207472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.221034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.221066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.221078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.234495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.234634] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.234652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.248180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.248214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.248226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.261611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.261750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.261765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.275258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.275292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.275304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.288796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.288938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.288955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.302482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.302516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.302528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.316142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.316281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.316300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.330140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 
18:38:55.330175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.330187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.343686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.343719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.343731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.357177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.357210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.357222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.370640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.370673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.370684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.384085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.384228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.384244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.397830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.397865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.397877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.411326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.411482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.411497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.424969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.425002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.425014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.522 [2024-12-08 18:38:55.438677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.522 [2024-12-08 18:38:55.438814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.522 [2024-12-08 18:38:55.438833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.452862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.453037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.453069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.466858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.466891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.466903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.480448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.480480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.480492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.494005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.494038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.494049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.507504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.507643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.507660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.521227] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.521261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.521272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.534771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.534911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.534927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.548473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.548506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.548518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.561936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.562082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.562097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.575621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.575654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.575665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.589195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.589230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.589242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.602745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.602777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.602789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:37.781 [2024-12-08 18:38:55.616250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.616390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.616433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.629952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.630105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.630279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.643901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.644073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.644226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.657874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.658026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.658200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.671697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.671872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.672027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.685741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.685892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.781 [2024-12-08 18:38:55.686044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.781 [2024-12-08 18:38:55.699616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:37.781 [2024-12-08 18:38:55.699769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.782 [2024-12-08 18:38:55.699942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.052 [2024-12-08 18:38:55.714497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.052 [2024-12-08 18:38:55.714665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.052 [2024-12-08 18:38:55.714782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.052 [2024-12-08 18:38:55.728389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.052 [2024-12-08 18:38:55.728587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.052 [2024-12-08 18:38:55.728712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.052 [2024-12-08 18:38:55.742284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.052 [2024-12-08 18:38:55.742469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.052 [2024-12-08 18:38:55.742593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.052 [2024-12-08 18:38:55.756112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.052 [2024-12-08 18:38:55.756281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.756391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.770084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.770250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.770358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.783946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.784129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.784257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.797936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.798090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.798242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.811870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.811905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.811917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.825384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.825553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.825569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.839560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.839711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.839727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.855829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.855871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.855883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.870937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.871096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.871116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.885807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.885955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.886066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.899950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.900109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:38.053 [2024-12-08 18:38:55.900248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.914128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.914279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.914432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.928359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.928551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.928692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.948368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.948537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.948646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.053 [2024-12-08 18:38:55.962292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.053 [2024-12-08 18:38:55.962485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.053 [2024-12-08 18:38:55.962611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:55.976987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:55.977149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:55.977278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:55.991442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:55.991601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:55.991732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.005973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.006135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:12884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.006263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.020317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.020491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.020616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.034135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.034286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.034467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.049578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.049732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.049870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 18091.00 IOPS, 70.67 MiB/s [2024-12-08T18:38:56.242Z] [2024-12-08 18:38:56.063512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.063662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.063858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.077504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.077655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.077806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.091590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.091741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.091878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.105430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.105598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.105718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.119300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.119482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.119501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.133086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.133120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.133131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.146636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.312 [2024-12-08 18:38:56.146790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.312 [2024-12-08 18:38:56.146806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.312 [2024-12-08 18:38:56.160419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.313 [2024-12-08 18:38:56.160462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.313 [2024-12-08 18:38:56.160475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.313 [2024-12-08 18:38:56.173987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.313 [2024-12-08 18:38:56.174142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.313 [2024-12-08 18:38:56.174161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.313 [2024-12-08 18:38:56.187750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.313 [2024-12-08 18:38:56.187783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.313 [2024-12-08 18:38:56.187802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.313 [2024-12-08 18:38:56.201329] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.313 [2024-12-08 18:38:56.201362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.313 [2024-12-08 18:38:56.201374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.313 [2024-12-08 18:38:56.214898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.313 [2024-12-08 18:38:56.214932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.313 [2024-12-08 18:38:56.214944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.313 [2024-12-08 18:38:56.228468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.313 [2024-12-08 18:38:56.228501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.313 [2024-12-08 18:38:56.228512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.571 [2024-12-08 18:38:56.242266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.571 [2024-12-08 18:38:56.242298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.571 [2024-12-08 18:38:56.242310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.571 [2024-12-08 18:38:56.255928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.571 [2024-12-08 18:38:56.255963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.571 [2024-12-08 18:38:56.255975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.571 [2024-12-08 18:38:56.269534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.571 [2024-12-08 18:38:56.269566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.571 [2024-12-08 18:38:56.269577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.571 [2024-12-08 18:38:56.283068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.571 [2024-12-08 18:38:56.283209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.571 [2024-12-08 18:38:56.283226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:38.572 [2024-12-08 18:38:56.296795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.296829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.296841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.310369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.310540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.310558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.324163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.324313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.324472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.338119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.338268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.338395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.352143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.352292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.352452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.366092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.366241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.366392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.380149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.380299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.380464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.394125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.394297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.394452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.408387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.408578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.408707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.422725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.422911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.423029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.436878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.437026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.437154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.450786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.450936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.451056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.464797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.464965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.465093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.478724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.478875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.479023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.572 [2024-12-08 18:38:56.492785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.572 [2024-12-08 18:38:56.492935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.572 [2024-12-08 18:38:56.492953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.507032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.507068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.507080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.520851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.520883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.520895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.534483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.534516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.534527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.548060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.548272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.548291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.561976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.562009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.562021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.575599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.575631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:38.831 [2024-12-08 18:38:56.575643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.589225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.589258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.589269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.602807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.602944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.602963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.616727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.616760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.616772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.630319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.630483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.831 [2024-12-08 18:38:56.630499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.831 [2024-12-08 18:38:56.644106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.831 [2024-12-08 18:38:56.644300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.644419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.657978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.658129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.658264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.671933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.672119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.672291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.686105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.686281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.686458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.700106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.700291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.700418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.713914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.714099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.714249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.727987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.728189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.728338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.742049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.742199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.742350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.832 [2024-12-08 18:38:56.756445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:38.832 [2024-12-08 18:38:56.756635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.832 [2024-12-08 18:38:56.756777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.770922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.771092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.771221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.784987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.785155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.785283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.798893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.799061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.799174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.812830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.812977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.812993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.826650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.826683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.826694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.846227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.846381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.846397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.860326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.860361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.860380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.874694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 
[2024-12-08 18:38:56.874727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.874738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.889386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.889426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.889437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.903691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.903881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.903901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.917581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.091 [2024-12-08 18:38:56.917615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.091 [2024-12-08 18:38:56.917634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.091 [2024-12-08 18:38:56.931165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.092 [2024-12-08 18:38:56.931301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.092 [2024-12-08 18:38:56.931316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.092 [2024-12-08 18:38:56.945216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.092 [2024-12-08 18:38:56.945251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.092 [2024-12-08 18:38:56.945263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.092 [2024-12-08 18:38:56.958761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.092 [2024-12-08 18:38:56.958794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.092 [2024-12-08 18:38:56.958806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.092 [2024-12-08 18:38:56.972309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f96510) 00:21:39.092 [2024-12-08 18:38:56.972466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.092 [2024-12-08 18:38:56.972486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.092 [2024-12-08 18:38:56.986076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.092 [2024-12-08 18:38:56.986109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.092 [2024-12-08 18:38:56.986121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.092 [2024-12-08 18:38:56.999651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.092 [2024-12-08 18:38:56.999683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.092 [2024-12-08 18:38:56.999695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.092 [2024-12-08 18:38:57.013325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.092 [2024-12-08 18:38:57.013358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.092 [2024-12-08 18:38:57.013369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.351 [2024-12-08 18:38:57.027768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.351 [2024-12-08 18:38:57.027962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.351 [2024-12-08 18:38:57.027978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.351 [2024-12-08 18:38:57.042533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.351 [2024-12-08 18:38:57.042686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.351 [2024-12-08 18:38:57.042702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.351 18153.50 IOPS, 70.91 MiB/s [2024-12-08T18:38:57.281Z] [2024-12-08 18:38:57.058301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f96510) 00:21:39.351 [2024-12-08 18:38:57.058449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.351 [2024-12-08 18:38:57.058477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.351 00:21:39.351 
Latency(us) 00:21:39.351 [2024-12-08T18:38:57.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.351 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:39.351 nvme0n1 : 2.01 18181.93 71.02 0.00 0.00 7034.34 6613.18 26691.03 00:21:39.351 [2024-12-08T18:38:57.281Z] =================================================================================================================== 00:21:39.351 [2024-12-08T18:38:57.281Z] Total : 18181.93 71.02 0.00 0.00 7034.34 6613.18 26691.03 00:21:39.351 { 00:21:39.351 "results": [ 00:21:39.351 { 00:21:39.351 "job": "nvme0n1", 00:21:39.351 "core_mask": "0x2", 00:21:39.351 "workload": "randread", 00:21:39.351 "status": "finished", 00:21:39.351 "queue_depth": 128, 00:21:39.351 "io_size": 4096, 00:21:39.351 "runtime": 2.010898, 00:21:39.351 "iops": 18181.92668151244, 00:21:39.351 "mibps": 71.02315109965797, 00:21:39.351 "io_failed": 0, 00:21:39.351 "io_timeout": 0, 00:21:39.351 "avg_latency_us": 7034.335896086845, 00:21:39.351 "min_latency_us": 6613.178181818182, 00:21:39.351 "max_latency_us": 26691.025454545455 00:21:39.351 } 00:21:39.351 ], 00:21:39.351 "core_count": 1 00:21:39.351 } 00:21:39.351 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:39.351 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:39.351 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:39.351 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:39.351 | .driver_specific 00:21:39.351 | .nvme_error 00:21:39.351 | .status_code 00:21:39.351 | .command_transient_transport_error' 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94802 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94802 ']' 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94802 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94802 00:21:39.611 killing process with pid 94802 00:21:39.611 Received shutdown signal, test time was about 2.000000 seconds 00:21:39.611 00:21:39.611 Latency(us) 00:21:39.611 [2024-12-08T18:38:57.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.611 [2024-12-08T18:38:57.541Z] =================================================================================================================== 00:21:39.611 [2024-12-08T18:38:57.541Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:39.611 
18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94802' 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94802 00:21:39.611 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94802 00:21:39.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94849 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94849 /var/tmp/bperf.sock 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94849 ']' 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.871 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.871 [2024-12-08 18:38:57.674001] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:39.871 [2024-12-08 18:38:57.674254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:21:39.871 Zero copy mechanism will not be used. 
00:21:39.871 llocations --file-prefix=spdk_pid94849 ] 00:21:40.131 [2024-12-08 18:38:57.805444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.131 [2024-12-08 18:38:57.861076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.131 [2024-12-08 18:38:57.930850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:40.131 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.131 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:40.131 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:40.131 18:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:40.390 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:40.390 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.390 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.390 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.390 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.390 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.959 nvme0n1 00:21:40.959 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:40.959 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.959 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.959 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.959 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:40.959 18:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:40.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:40.959 Zero copy mechanism will not be used. 00:21:40.959 Running I/O for 2 seconds... 
00:21:40.959 [2024-12-08 18:38:58.757044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.959 [2024-12-08 18:38:58.757104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.959 [2024-12-08 18:38:58.757119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.959 [2024-12-08 18:38:58.761424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.959 [2024-12-08 18:38:58.761458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.959 [2024-12-08 18:38:58.761470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.959 [2024-12-08 18:38:58.765682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.959 [2024-12-08 18:38:58.765717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.959 [2024-12-08 18:38:58.765729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.959 [2024-12-08 18:38:58.770030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.959 [2024-12-08 18:38:58.770066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.959 [2024-12-08 18:38:58.770078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.959 [2024-12-08 18:38:58.774293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.959 [2024-12-08 18:38:58.774329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.959 [2024-12-08 18:38:58.774342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.959 [2024-12-08 18:38:58.778489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.959 [2024-12-08 18:38:58.778522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.778534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.782673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.782706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.782718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.786967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.787002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.791164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.791199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.791212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.795492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.795526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.795546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.799710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.799745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.799768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.804010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.804046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.804058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.808288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.808322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.808334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.812626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.812660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.812671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.816834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.816868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.816880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.821214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.821250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.821263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.825489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.825522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.825534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.829701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.829735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.829747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.833899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.833934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.833946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.838179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.838212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.838224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.842444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.842476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.960 [2024-12-08 18:38:58.842488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.846699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.846733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.846745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.851025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.851059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.851071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.855254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.855288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.855299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.859586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.859620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.859640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.863871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.863906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.863918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.868073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.868107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.868120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.872370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.872414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.872438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.876670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.876703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.876714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.880958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.880992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.881004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.960 [2024-12-08 18:38:58.885479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:40.960 [2024-12-08 18:38:58.885523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.960 [2024-12-08 18:38:58.885536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.889835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.889869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.889881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.894256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.894291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.894302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.898677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.898713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.898733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.903028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.903062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.903074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.907474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.907507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.907527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.911982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.912018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.912031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.916676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.916709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.916732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.921522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.921555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.921575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.926175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.926210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.926230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.930795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.930842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.930854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.935398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 
00:21:41.222 [2024-12-08 18:38:58.935443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.935463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.940025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.940222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.940241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.944669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.944704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.944723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.949153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.949188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.949200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.953460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.953493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.953505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.957679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.957713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.957724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.961902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.961936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.961955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.966241] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.966275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.966288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.970581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.970615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.970627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.974717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.974750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.974762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.978946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.978981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.978993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.983170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.983204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.983215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.987362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.987396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.987435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.991636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.991669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.991681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:58.995972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:58.996008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:58.996022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.222 [2024-12-08 18:38:59.000204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.222 [2024-12-08 18:38:59.000239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.222 [2024-12-08 18:38:59.000250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.004508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.004541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.004553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.008743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.008776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.008788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.012961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.012996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.013008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.017188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.017222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.017234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.021536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.021569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.021581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.025692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.025726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.025738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.029922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.029956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.029967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.034171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.034205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.034217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.038427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.038460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.038472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.042603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.042636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.042648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.046834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.046868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.046880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.051024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.051057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.051069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.055253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.055286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.055298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.059545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.059578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.059590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.063790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.063864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.063876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.068229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.068263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.068275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.072584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.072617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.072629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.076745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.076779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.076790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.080978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.081013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.223 [2024-12-08 18:38:59.081025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.085161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.085195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.085207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.089358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.089391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.089429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.093562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.093595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.093606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.097709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.097743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.097755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.101920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.101954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.101965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.106161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.106195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.106207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.110469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.110503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.110515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.223 [2024-12-08 18:38:59.114707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.223 [2024-12-08 18:38:59.114739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.223 [2024-12-08 18:38:59.114751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.224 [2024-12-08 18:38:59.118944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.224 [2024-12-08 18:38:59.118977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.224 [2024-12-08 18:38:59.118989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.224 [2024-12-08 18:38:59.123117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.224 [2024-12-08 18:38:59.123151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.224 [2024-12-08 18:38:59.123163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.224 [2024-12-08 18:38:59.127451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.224 [2024-12-08 18:38:59.127483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.224 [2024-12-08 18:38:59.127495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.224 [2024-12-08 18:38:59.131635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.224 [2024-12-08 18:38:59.131669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.224 [2024-12-08 18:38:59.131680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.224 [2024-12-08 18:38:59.135751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.224 [2024-12-08 18:38:59.135785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.224 [2024-12-08 18:38:59.135819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.224 [2024-12-08 18:38:59.139979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.224 [2024-12-08 18:38:59.140014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.224 [2024-12-08 18:38:59.140026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.224 [2024-12-08 18:38:59.144257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.224 [2024-12-08 18:38:59.144292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.224 [2024-12-08 18:38:59.144305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.485 [2024-12-08 18:38:59.148761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.485 [2024-12-08 18:38:59.148797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.485 [2024-12-08 18:38:59.148809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.485 [2024-12-08 18:38:59.153172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.485 [2024-12-08 18:38:59.153205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.485 [2024-12-08 18:38:59.153225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.485 [2024-12-08 18:38:59.157604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.485 [2024-12-08 18:38:59.157637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.485 [2024-12-08 18:38:59.157649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.485 [2024-12-08 18:38:59.161732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.485 [2024-12-08 18:38:59.161765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.485 [2024-12-08 18:38:59.161777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.165969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.166003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.166015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.170244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 
00:21:41.486 [2024-12-08 18:38:59.170279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.170290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.174511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.174542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.174553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.178715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.178748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.178759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.183028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.183062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.183074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.187345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.187379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.187398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.191852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.191888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.191899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.196785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.196819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.196830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.201763] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.201798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.201818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.206064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.206099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.206119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.210337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.210371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.210382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.214577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.214612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.214623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.218723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.218756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.218767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.222860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.222894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.222905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.227036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.227070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.227082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.231331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.231365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.231376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.235540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.235574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.235585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.239706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.239739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.239750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.243897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.243933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.243944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.248135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.248185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.248197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.252515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.252549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.252560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.256716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.256749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.256761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.260892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.260925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.260936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.265079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.265114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.486 [2024-12-08 18:38:59.265125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.486 [2024-12-08 18:38:59.269293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.486 [2024-12-08 18:38:59.269326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.269337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.273523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.273558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.273569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.277698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.277732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.277743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.281975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.282009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.282019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.286251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.286286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.286297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.290572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.290606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.290617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.294782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.294815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.294827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.298995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.299032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.299043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.303146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.303179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.303190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.307319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.307356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.307366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.311532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.311568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.311579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.315645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.315679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.487 [2024-12-08 18:38:59.315690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.319754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.319787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.319821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.323939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.323974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.323986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.328150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.328199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.328224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.332631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.332665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.332676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.336869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.336903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.336915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.341236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.341271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.341283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.345503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.345536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.345548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.349667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.349701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.349713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.353867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.353901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.353912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.358143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.358177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.358188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.362423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.362458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.362469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.366699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.366749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.366761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.370936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.370970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.370981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.375187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.375221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.375232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.379449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.487 [2024-12-08 18:38:59.379485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.487 [2024-12-08 18:38:59.379497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.487 [2024-12-08 18:38:59.383697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.488 [2024-12-08 18:38:59.383730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.488 [2024-12-08 18:38:59.383741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.488 [2024-12-08 18:38:59.387903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.488 [2024-12-08 18:38:59.387937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.488 [2024-12-08 18:38:59.387949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.488 [2024-12-08 18:38:59.392028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.488 [2024-12-08 18:38:59.392063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.488 [2024-12-08 18:38:59.392084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.488 [2024-12-08 18:38:59.396236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.488 [2024-12-08 18:38:59.396269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.488 [2024-12-08 18:38:59.396280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.488 [2024-12-08 18:38:59.400570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.488 [2024-12-08 18:38:59.400602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.488 [2024-12-08 18:38:59.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.488 [2024-12-08 18:38:59.404833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 
00:21:41.488 [2024-12-08 18:38:59.404866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.488 [2024-12-08 18:38:59.404877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.488 [2024-12-08 18:38:59.409316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.488 [2024-12-08 18:38:59.409349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.488 [2024-12-08 18:38:59.409360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.749 [2024-12-08 18:38:59.414291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.749 [2024-12-08 18:38:59.414341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.749 [2024-12-08 18:38:59.414352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.749 [2024-12-08 18:38:59.418858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.749 [2024-12-08 18:38:59.418892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.749 [2024-12-08 18:38:59.418912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.749 [2024-12-08 18:38:59.423647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.423679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.423690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.428266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.428298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.428309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.433133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.433166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.433177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.437785] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.437821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.437838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.442394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.442448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.442460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.446935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.446968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.446979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.451353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.451385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.451396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.455714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.455747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.455758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.460062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.460097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.460110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.464474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.464506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.464517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.468735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.468767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.468777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.473030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.473062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.473074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.477272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.477305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.477316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.481662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.481694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.481706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.485922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.485954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.485965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.490206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.490240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.490251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.494460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.494492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.494503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.498680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.498714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.498724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.503100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.503134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.503145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.507374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.507418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.507430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.511620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.511653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.511663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.515735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.515768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.515778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.519884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.519917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.519930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.524224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.524256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.524267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.528580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.528612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.528623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.532784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.532816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.532828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.750 [2024-12-08 18:38:59.537021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.750 [2024-12-08 18:38:59.537054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.750 [2024-12-08 18:38:59.537065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.541353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.541385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.541398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.545732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.545765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.545775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.549955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.549988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.549999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.554125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.554157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.554169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.558594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.558631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.558642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.562834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.562867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.562878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.567181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.567230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.567241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.571487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.571519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.571530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.575706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.575738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.575749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.579896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.579930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.579942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.584084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.584118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.584146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.588488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.588519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.588531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.592714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.592746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.592757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.597009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.597042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.597053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.601290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.601324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.601335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.605635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.605667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.605678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.610113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.610146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.610156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.614391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.614433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.614444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.618535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.618567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.618578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.622691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.622723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.622734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.626798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.626828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.626838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.631237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.631271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.631282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.635454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.635483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.635494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.639655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.639700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.639711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.643863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 
00:21:41.751 [2024-12-08 18:38:59.643908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.643921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.648134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.648176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.648187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.751 [2024-12-08 18:38:59.652523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.751 [2024-12-08 18:38:59.652554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.751 [2024-12-08 18:38:59.652565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.752 [2024-12-08 18:38:59.656657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.752 [2024-12-08 18:38:59.656687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.752 [2024-12-08 18:38:59.656697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.752 [2024-12-08 18:38:59.660779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.752 [2024-12-08 18:38:59.660808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.752 [2024-12-08 18:38:59.660819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.752 [2024-12-08 18:38:59.664875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.752 [2024-12-08 18:38:59.664904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.752 [2024-12-08 18:38:59.664914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.752 [2024-12-08 18:38:59.668958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.752 [2024-12-08 18:38:59.668987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.752 [2024-12-08 18:38:59.668997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.752 [2024-12-08 18:38:59.673208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1856f50) 00:21:41.752 [2024-12-08 18:38:59.673253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.752 [2024-12-08 18:38:59.673277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.013 [2024-12-08 18:38:59.677756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.013 [2024-12-08 18:38:59.677788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.013 [2024-12-08 18:38:59.677799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.013 [2024-12-08 18:38:59.681884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.013 [2024-12-08 18:38:59.681913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.013 [2024-12-08 18:38:59.681923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.013 [2024-12-08 18:38:59.686236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.013 [2024-12-08 18:38:59.686266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.013 [2024-12-08 18:38:59.686277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.013 [2024-12-08 18:38:59.690441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.013 [2024-12-08 18:38:59.690470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.013 [2024-12-08 18:38:59.690481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.013 [2024-12-08 18:38:59.694671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.013 [2024-12-08 18:38:59.694701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.013 [2024-12-08 18:38:59.694711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.013 [2024-12-08 18:38:59.698812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.013 [2024-12-08 18:38:59.698842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.013 [2024-12-08 18:38:59.698852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.702911] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.702941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.702952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.707015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.707045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.707055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.711204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.711234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.711244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.715282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.715328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.715338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.719391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.719433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.719444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.723474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.723502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.723513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.727556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.727589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.727600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.731699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.731729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.731739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.735823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.735870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.735881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.739884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.739914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.739925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.744024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.744055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.744066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.014 7161.00 IOPS, 895.12 MiB/s [2024-12-08T18:38:59.944Z] [2024-12-08 18:38:59.749534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.749564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.749574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.753624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.753653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.753663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.757724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.757754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.757764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.761879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.761909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.761919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.766039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.766069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.766080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.770198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.770227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.770237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.774346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.774376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.774387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.778375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.778415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.778428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.782548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.782578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.782588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.786682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.786712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.014 [2024-12-08 18:38:59.786722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.790798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.790828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.790838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.794915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.794945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.794955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.799036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.799066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.799076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.803168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.803198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.803209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.014 [2024-12-08 18:38:59.807332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.014 [2024-12-08 18:38:59.807362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.014 [2024-12-08 18:38:59.807374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.811420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.811449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.811459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.815584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.815616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.815627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.819666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.819696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.819706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.823739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.823769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.823779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.827882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.827914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.827926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.832034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.832066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.832078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.836151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.836190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.836201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.840349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.840379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.840389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.844679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.844709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.844720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.848819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.848848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.848859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.853107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.853137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.853147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.857252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.857282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.857293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.861318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.861348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.861358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.865454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.865483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.865493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.869556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.869585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.869594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.873623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 
00:21:42.015 [2024-12-08 18:38:59.873653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.873664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.877720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.877749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.877759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.881827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.881856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.881866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.885920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.885950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.885961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.890098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.890127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.890138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.894224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.894254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.894264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.898359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.898389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.898411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.902470] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.902498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.902509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.906656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.906684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.906694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.910797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.910826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.910837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.914979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.015 [2024-12-08 18:38:59.915009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.015 [2024-12-08 18:38:59.915019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.015 [2024-12-08 18:38:59.919096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.016 [2024-12-08 18:38:59.919126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.016 [2024-12-08 18:38:59.919137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.016 [2024-12-08 18:38:59.923225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.016 [2024-12-08 18:38:59.923256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.016 [2024-12-08 18:38:59.923266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.016 [2024-12-08 18:38:59.927605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.016 [2024-12-08 18:38:59.927635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.016 [2024-12-08 18:38:59.927645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:42.016 [2024-12-08 18:38:59.931892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.016 [2024-12-08 18:38:59.931923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.016 [2024-12-08 18:38:59.931934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.016 [2024-12-08 18:38:59.936621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.016 [2024-12-08 18:38:59.936650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.016 [2024-12-08 18:38:59.936661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.941629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.941660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.941670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.946631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.946661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.946671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.951331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.951361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.951372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.955867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.955910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.955923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.960313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.960342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.960353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.964637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.964667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.964677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.968993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.969023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.969034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.973343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.973372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.973382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.977710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.977742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.977752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.981884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.981916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.981927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.986056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.986090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.986100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.990204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.990236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.990247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.994412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.994440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.994450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:38:59.998501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:38:59.998529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:38:59.998540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:39:00.002608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:39:00.002638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:39:00.002648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:39:00.006750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.277 [2024-12-08 18:39:00.006780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.277 [2024-12-08 18:39:00.006791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.277 [2024-12-08 18:39:00.010899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.010930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.010940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.015082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.015113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.015123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.019277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.019310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.278 [2024-12-08 18:39:00.019320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.023387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.023427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.023437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.027478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.027510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.027521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.031584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.031614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.031625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.035702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.035735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.035745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.039723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.039753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.039763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.043842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.043873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.043885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.047934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.047965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.047976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.052078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.052109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.052120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.056205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.056234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.056245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.060326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.060358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.060368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.064441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.064468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.064478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.068613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.068642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.068653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.072766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.072795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.072805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.076863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.076892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.076902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.081014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.081044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.081054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.085171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.085201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.085211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.089311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.089341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.089352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.093459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.093489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.093499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.097557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.097586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.097597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.101657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.101687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.101697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.105762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 
00:21:42.278 [2024-12-08 18:39:00.105792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.105803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.109851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.109882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.109892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.113967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.113997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.114008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.118118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.278 [2024-12-08 18:39:00.118148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.278 [2024-12-08 18:39:00.118158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.278 [2024-12-08 18:39:00.122266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.122296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.122308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.126418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.126446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.126457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.130614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.130644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.130655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.134730] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.134759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.134770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.138875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.138905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.138915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.143100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.143132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.143143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.147259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.147291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.147301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.151399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.151440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.151451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.155543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.155572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.155582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.159624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.159656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.159666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.163756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.163806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.163844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.167972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.168004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.168014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.172077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.172110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.172122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.176407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.176463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.176474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.180645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.180674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.180685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.184756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.184785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.184795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.279 [2024-12-08 18:39:00.188924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:42.279 [2024-12-08 18:39:00.188954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.279 [2024-12-08 18:39:00.188964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:42.279 [2024-12-08 18:39:00.193117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50)
00:21:42.279 [2024-12-08 18:39:00.193147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.279 [2024-12-08 18:39:00.193157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message sequence (nvme_tcp.c:1470 data digest error on tqpair 0x1856f50, nvme_qpair.c:243 READ sqid:1 cid:15 command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every remaining READ in this pass with varying LBAs, timestamps 18:39:00.197 through 18:39:00.741 ...]
lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.066 [2024-12-08 18:39:00.736904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.066 [2024-12-08 18:39:00.741314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:43.066 [2024-12-08 18:39:00.741344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.066 [2024-12-08 18:39:00.741355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.066 7277.50 IOPS, 909.69 MiB/s [2024-12-08T18:39:00.996Z] [2024-12-08 18:39:00.746690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1856f50) 00:21:43.066 [2024-12-08 18:39:00.746720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.066 [2024-12-08 18:39:00.746730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.066 00:21:43.066 Latency(us) 00:21:43.066 [2024-12-08T18:39:00.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.066 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:43.067 nvme0n1 : 2.00 7278.94 909.87 0.00 0.00 2195.09 1936.29 9830.40 00:21:43.067 [2024-12-08T18:39:00.997Z] =================================================================================================================== 00:21:43.067 [2024-12-08T18:39:00.997Z] Total : 7278.94 909.87 0.00 0.00 2195.09 1936.29 9830.40 00:21:43.067 { 00:21:43.067 "results": [ 00:21:43.067 { 00:21:43.067 "job": "nvme0n1", 00:21:43.067 "core_mask": "0x2", 00:21:43.067 "workload": "randread", 00:21:43.067 "status": "finished", 00:21:43.067 "queue_depth": 16, 00:21:43.067 "io_size": 131072, 00:21:43.067 "runtime": 2.001802, 00:21:43.067 "iops": 7278.94167355213, 00:21:43.067 "mibps": 909.8677091940162, 00:21:43.067 "io_failed": 0, 00:21:43.067 "io_timeout": 0, 00:21:43.067 "avg_latency_us": 2195.086575202301, 00:21:43.067 "min_latency_us": 1936.290909090909, 00:21:43.067 "max_latency_us": 9830.4 00:21:43.067 } 00:21:43.067 ], 00:21:43.067 "core_count": 1 00:21:43.067 } 00:21:43.067 18:39:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:43.067 18:39:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:43.067 18:39:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:43.067 | .driver_specific 00:21:43.067 | .nvme_error 00:21:43.067 | .status_code 00:21:43.067 | .command_transient_transport_error' 00:21:43.067 18:39:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 470 > 0 )) 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94849 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@950 -- # '[' -z 94849 ']' 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94849 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94849 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:43.330 killing process with pid 94849 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94849' 00:21:43.330 Received shutdown signal, test time was about 2.000000 seconds 00:21:43.330 00:21:43.330 Latency(us) 00:21:43.330 [2024-12-08T18:39:01.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.330 [2024-12-08T18:39:01.260Z] =================================================================================================================== 00:21:43.330 [2024-12-08T18:39:01.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94849 00:21:43.330 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94849 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94906 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94906 /var/tmp/bperf.sock 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94906 ']' 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
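The trace just above reads the outcome of the randread digest-error case back out of bdevperf: bdev_get_iostat is issued over /var/tmp/bperf.sock and a jq filter pulls the transient-transport-error counter out of the per-bdev NVMe error statistics (the same --nvme-error-stat bookkeeping shown being enabled for the next run below), and the check passes because 470 such errors were recorded before bdevperf is killed. A condensed sketch of that readout, using only the paths, filter and value visible in this run (the get_transient_errcount/bperf_rpc wrappers themselves live in host/digest.sh and the common test helpers):

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # 470 in this run, so the randread digest-error case passes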
00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.588 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:43.588 [2024-12-08 18:39:01.392502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:43.588 [2024-12-08 18:39:01.392586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94906 ] 00:21:43.847 [2024-12-08 18:39:01.523045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.847 [2024-12-08 18:39:01.576630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.847 [2024-12-08 18:39:01.645181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.847 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.847 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:43.847 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:43.847 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:44.105 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:44.105 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.105 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.105 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.105 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.105 18:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.363 nvme0n1 00:21:44.622 18:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:44.622 18:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.622 18:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.622 18:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.622 18:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:44.622 18:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:44.622 Running I/O for 2 seconds... 
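Before the 2-second randwrite workload above starts, the trace shows the whole error-injection setup for this case: NVMe error statistics are switched on with retries disabled, crc32c injection is disabled while the controller is attached with data digest enabled (--ddgst) over TCP, injection is then re-armed in corrupt mode with -i 256, and perform_tests launches the run whose WRITE digest errors follow. A condensed sketch of that sequence, using only the helpers and arguments visible in the trace (bperf_rpc wraps rpc.py -s /var/tmp/bperf.sock as seen above; rpc_cmd and bperf_py are the framework's generic RPC and bdevperf.py helpers):

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd   accel_error_inject_error -o crc32c -t disable         # connect with digests intact
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # data digest on the TCP qpair
  rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt crc32c results, injection interval 256
  bperf_py  perform_tests                                         # 2-second randwrite, 4096-byte I/O, queue depth 128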
00:21:44.622 [2024-12-08 18:39:02.417058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fef90 00:21:44.622 [2024-12-08 18:39:02.419262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.430263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198feb58 00:21:44.622 [2024-12-08 18:39:02.432380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.432422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.443238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fe2e8 00:21:44.622 [2024-12-08 18:39:02.445523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.445552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.456297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fda78 00:21:44.622 [2024-12-08 18:39:02.458540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.458848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.469738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fd208 00:21:44.622 [2024-12-08 18:39:02.471950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.472201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.483094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fc998 00:21:44.622 [2024-12-08 18:39:02.485287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.485476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.496382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fc128 00:21:44.622 [2024-12-08 18:39:02.498534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.498719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:21:44.622 [2024-12-08 18:39:02.509968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fb8b8 00:21:44.622 [2024-12-08 18:39:02.512164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.512338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.523297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fb048 00:21:44.622 [2024-12-08 18:39:02.525429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.525599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.536521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fa7d8 00:21:44.622 [2024-12-08 18:39:02.538605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.622 [2024-12-08 18:39:02.538777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.622 [2024-12-08 18:39:02.549904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f9f68 00:21:44.881 [2024-12-08 18:39:02.552004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.881 [2024-12-08 18:39:02.552211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.881 [2024-12-08 18:39:02.563708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f96f8 00:21:44.881 [2024-12-08 18:39:02.565777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.881 [2024-12-08 18:39:02.565947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.881 [2024-12-08 18:39:02.577009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f8e88 00:21:44.881 [2024-12-08 18:39:02.579105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.881 [2024-12-08 18:39:02.579257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.881 [2024-12-08 18:39:02.590266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f8618 00:21:44.881 [2024-12-08 18:39:02.592277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.881 [2024-12-08 18:39:02.592305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:21:44.881 [2024-12-08 18:39:02.603159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f7da8 00:21:44.881 [2024-12-08 18:39:02.605125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.881 [2024-12-08 18:39:02.605157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.881 [2024-12-08 18:39:02.616068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f7538 00:21:44.881 [2024-12-08 18:39:02.618034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.881 [2024-12-08 18:39:02.618059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.628968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f6cc8 00:21:44.882 [2024-12-08 18:39:02.630847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.630878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.641772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f6458 00:21:44.882 [2024-12-08 18:39:02.643822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.643853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.654754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f5be8 00:21:44.882 [2024-12-08 18:39:02.656574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.656717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.667723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f5378 00:21:44.882 [2024-12-08 18:39:02.669522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.669554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.680430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f4b08 00:21:44.882 [2024-12-08 18:39:02.682204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.682235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.693148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f4298 00:21:44.882 [2024-12-08 18:39:02.694972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.695002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.705931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f3a28 00:21:44.882 [2024-12-08 18:39:02.707712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.707881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.718854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f31b8 00:21:44.882 [2024-12-08 18:39:02.720762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.720790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.731746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f2948 00:21:44.882 [2024-12-08 18:39:02.733520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.733552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.744558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f20d8 00:21:44.882 [2024-12-08 18:39:02.746246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.746276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.757263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f1868 00:21:44.882 [2024-12-08 18:39:02.759225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.759256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.770525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f0ff8 00:21:44.882 [2024-12-08 18:39:02.772251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.772282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.783398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f0788 00:21:44.882 [2024-12-08 18:39:02.785105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.785136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.882 [2024-12-08 18:39:02.796250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eff18 00:21:44.882 [2024-12-08 18:39:02.797960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.882 [2024-12-08 18:39:02.797991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:45.141 [2024-12-08 18:39:02.809300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ef6a8 00:21:45.141 [2024-12-08 18:39:02.811193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.141 [2024-12-08 18:39:02.811224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:45.141 [2024-12-08 18:39:02.822882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eee38 00:21:45.141 [2024-12-08 18:39:02.824543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.141 [2024-12-08 18:39:02.824575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:45.141 [2024-12-08 18:39:02.835768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ee5c8 00:21:45.141 [2024-12-08 18:39:02.837620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.141 [2024-12-08 18:39:02.837651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:45.141 [2024-12-08 18:39:02.848821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198edd58 00:21:45.141 [2024-12-08 18:39:02.850388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.141 [2024-12-08 18:39:02.850440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:45.141 [2024-12-08 18:39:02.861539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ed4e8 00:21:45.141 [2024-12-08 18:39:02.863363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.863393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.874642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ecc78 00:21:45.142 [2024-12-08 18:39:02.876290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.876322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.887516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ec408 00:21:45.142 [2024-12-08 18:39:02.889291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.889322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.900547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ebb98 00:21:45.142 [2024-12-08 18:39:02.902054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.902084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.913262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eb328 00:21:45.142 [2024-12-08 18:39:02.914899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.914930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.926225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eaab8 00:21:45.142 [2024-12-08 18:39:02.927837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.927868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.939153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ea248 00:21:45.142 [2024-12-08 18:39:02.940700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.940730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.952248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e99d8 00:21:45.142 [2024-12-08 18:39:02.953897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.953923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.965260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e9168 00:21:45.142 [2024-12-08 18:39:02.966869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.966899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.978276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e88f8 00:21:45.142 [2024-12-08 18:39:02.979886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.979919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:02.991646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e8088 00:21:45.142 [2024-12-08 18:39:02.993173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:02.993206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:03.006113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e7818 00:21:45.142 [2024-12-08 18:39:03.007893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:03.007925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:03.020382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e6fa8 00:21:45.142 [2024-12-08 18:39:03.021971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:03.022001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:03.033665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e6738 00:21:45.142 [2024-12-08 18:39:03.035023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:03.035053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:03.046487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e5ec8 00:21:45.142 [2024-12-08 18:39:03.047891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:03.047922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:45.142 [2024-12-08 18:39:03.059287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e5658 00:21:45.142 [2024-12-08 18:39:03.060768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.142 [2024-12-08 18:39:03.060908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.072617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e4de8 00:21:45.401 [2024-12-08 18:39:03.073928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.073961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.085985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e4578 00:21:45.401 [2024-12-08 18:39:03.087447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.087480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.099930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e3d08 00:21:45.401 [2024-12-08 18:39:03.101373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.101428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.114126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e3498 00:21:45.401 [2024-12-08 18:39:03.115681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.115706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.127547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e2c28 00:21:45.401 [2024-12-08 18:39:03.128885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.128916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.140831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e23b8 00:21:45.401 [2024-12-08 18:39:03.142287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 
18:39:03.142315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.154165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e1b48 00:21:45.401 [2024-12-08 18:39:03.155494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.155524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.167053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e12d8 00:21:45.401 [2024-12-08 18:39:03.168342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.168374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.180245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e0a68 00:21:45.401 [2024-12-08 18:39:03.181675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.181701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.193454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e01f8 00:21:45.401 [2024-12-08 18:39:03.194700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.194856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.206515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198df988 00:21:45.401 [2024-12-08 18:39:03.207944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.207978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.219880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198df118 00:21:45.401 [2024-12-08 18:39:03.221091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.221122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.232978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198de8a8 00:21:45.401 [2024-12-08 18:39:03.234181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:45.401 [2024-12-08 18:39:03.234322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.246119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198de038 00:21:45.401 [2024-12-08 18:39:03.247303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.247334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.264573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198de038 00:21:45.401 [2024-12-08 18:39:03.266712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.401 [2024-12-08 18:39:03.266743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.401 [2024-12-08 18:39:03.277752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198de8a8 00:21:45.401 [2024-12-08 18:39:03.279899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.402 [2024-12-08 18:39:03.279933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:45.402 [2024-12-08 18:39:03.290985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198df118 00:21:45.402 [2024-12-08 18:39:03.293155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.402 [2024-12-08 18:39:03.293195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:45.402 [2024-12-08 18:39:03.303880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198df988 00:21:45.402 [2024-12-08 18:39:03.306217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.402 [2024-12-08 18:39:03.306256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.402 [2024-12-08 18:39:03.317194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e01f8 00:21:45.402 [2024-12-08 18:39:03.319209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.402 [2024-12-08 18:39:03.319236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:45.660 [2024-12-08 18:39:03.330246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e0a68 00:21:45.660 [2024-12-08 18:39:03.332378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5266 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:45.660 [2024-12-08 18:39:03.332416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:45.660 [2024-12-08 18:39:03.343448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e12d8 00:21:45.660 [2024-12-08 18:39:03.345558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.660 [2024-12-08 18:39:03.345585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.356483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e1b48 00:21:45.661 [2024-12-08 18:39:03.358439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.358466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.369563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e23b8 00:21:45.661 [2024-12-08 18:39:03.371528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.371557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.382440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e2c28 00:21:45.661 [2024-12-08 18:39:03.384461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.384489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.395297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e3498 00:21:45.661 [2024-12-08 18:39:03.398109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.398137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:45.661 19104.00 IOPS, 74.62 MiB/s [2024-12-08T18:39:03.591Z] [2024-12-08 18:39:03.409085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e3d08 00:21:45.661 [2024-12-08 18:39:03.411028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.411056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.421973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e4578 00:21:45.661 [2024-12-08 18:39:03.423934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.423962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.434827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e4de8 00:21:45.661 [2024-12-08 18:39:03.436744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.436771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.447567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e5658 00:21:45.661 [2024-12-08 18:39:03.449504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.449532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.460454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e5ec8 00:21:45.661 [2024-12-08 18:39:03.462278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.462305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.473144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e6738 00:21:45.661 [2024-12-08 18:39:03.475022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.475050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.485910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e6fa8 00:21:45.661 [2024-12-08 18:39:03.487714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.487753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.499009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e7818 00:21:45.661 [2024-12-08 18:39:03.500912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.500940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.512059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e8088 00:21:45.661 [2024-12-08 18:39:03.513888] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.513915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.524971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e88f8 00:21:45.661 [2024-12-08 18:39:03.526735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.526762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.537775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e9168 00:21:45.661 [2024-12-08 18:39:03.539544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.539571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.550548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198e99d8 00:21:45.661 [2024-12-08 18:39:03.552360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.552388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.563306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ea248 00:21:45.661 [2024-12-08 18:39:03.565096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.565135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.576162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eaab8 00:21:45.661 [2024-12-08 18:39:03.577869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.661 [2024-12-08 18:39:03.577895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:45.661 [2024-12-08 18:39:03.589144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eb328 00:21:45.920 [2024-12-08 18:39:03.590836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.590863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.602301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ebb98 00:21:45.920 [2024-12-08 
18:39:03.604049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.604089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.615183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ec408 00:21:45.920 [2024-12-08 18:39:03.616900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.616926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.628064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ecc78 00:21:45.920 [2024-12-08 18:39:03.629770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.629809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.640888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ed4e8 00:21:45.920 [2024-12-08 18:39:03.642551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.642577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.653608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198edd58 00:21:45.920 [2024-12-08 18:39:03.655250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.655277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.666418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ee5c8 00:21:45.920 [2024-12-08 18:39:03.668019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.668057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.679114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eee38 00:21:45.920 [2024-12-08 18:39:03.680742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.680769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.691942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198ef6a8 00:21:45.920 
[2024-12-08 18:39:03.693567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.693606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.704856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198eff18 00:21:45.920 [2024-12-08 18:39:03.706434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.706461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.717571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f0788 00:21:45.920 [2024-12-08 18:39:03.719142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.719169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.730427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f0ff8 00:21:45.920 [2024-12-08 18:39:03.731948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.731986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:45.920 [2024-12-08 18:39:03.743122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f1868 00:21:45.920 [2024-12-08 18:39:03.744677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.920 [2024-12-08 18:39:03.744714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.755871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f20d8 00:21:45.921 [2024-12-08 18:39:03.757394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.757428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.768700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f2948 00:21:45.921 [2024-12-08 18:39:03.770196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.770222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.781450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f31b8 
00:21:45.921 [2024-12-08 18:39:03.782944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.782982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.794222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f3a28 00:21:45.921 [2024-12-08 18:39:03.795685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.795712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.807015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f4298 00:21:45.921 [2024-12-08 18:39:03.808559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.808598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.819938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f4b08 00:21:45.921 [2024-12-08 18:39:03.821384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.821421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.832687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f5378 00:21:45.921 [2024-12-08 18:39:03.834124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.834151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:45.921 [2024-12-08 18:39:03.845428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f5be8 00:21:45.921 [2024-12-08 18:39:03.846852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.921 [2024-12-08 18:39:03.846880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.858649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f6458 00:21:46.180 [2024-12-08 18:39:03.860079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.860120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.871537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with 
pdu=0x2000198f6cc8 00:21:46.180 [2024-12-08 18:39:03.872930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.872957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.884261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f7538 00:21:46.180 [2024-12-08 18:39:03.885649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.885688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.897128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f7da8 00:21:46.180 [2024-12-08 18:39:03.898487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.898526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.909916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f8618 00:21:46.180 [2024-12-08 18:39:03.911223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.911251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.922629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f8e88 00:21:46.180 [2024-12-08 18:39:03.923927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.923967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.935380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f96f8 00:21:46.180 [2024-12-08 18:39:03.936709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.936736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.948118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f9f68 00:21:46.180 [2024-12-08 18:39:03.949390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.949438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.960918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x4e8210) with pdu=0x2000198fa7d8 00:21:46.180 [2024-12-08 18:39:03.962194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.962220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.973639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fb048 00:21:46.180 [2024-12-08 18:39:03.974903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.974930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.986418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fb8b8 00:21:46.180 [2024-12-08 18:39:03.987632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:03.987659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:03.999108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fc128 00:21:46.180 [2024-12-08 18:39:04.000351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.000378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:04.012221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fc998 00:21:46.180 [2024-12-08 18:39:04.013564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.013603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:04.026229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fd208 00:21:46.180 [2024-12-08 18:39:04.027707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.027734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:04.040639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fda78 00:21:46.180 [2024-12-08 18:39:04.041927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.041953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:04.053966] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fe2e8 00:21:46.180 [2024-12-08 18:39:04.055102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.055128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:04.066790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198feb58 00:21:46.180 [2024-12-08 18:39:04.067937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.067967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:04.084917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fef90 00:21:46.180 [2024-12-08 18:39:04.086982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.087011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.180 [2024-12-08 18:39:04.097658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198feb58 00:21:46.180 [2024-12-08 18:39:04.099707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.180 [2024-12-08 18:39:04.099746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.110757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fe2e8 00:21:46.439 [2024-12-08 18:39:04.112839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.112867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.123776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fda78 00:21:46.439 [2024-12-08 18:39:04.125864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.125891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.136697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fd208 00:21:46.439 [2024-12-08 18:39:04.138703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.138730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.149478] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fc998 00:21:46.439 [2024-12-08 18:39:04.151464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.151504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.162392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fc128 00:21:46.439 [2024-12-08 18:39:04.164447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.164475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.175210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fb8b8 00:21:46.439 [2024-12-08 18:39:04.177267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.177306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.188094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fb048 00:21:46.439 [2024-12-08 18:39:04.190040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.190066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.200801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198fa7d8 00:21:46.439 [2024-12-08 18:39:04.202731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.202758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.213551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f9f68 00:21:46.439 [2024-12-08 18:39:04.215460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.215499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.226327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f96f8 00:21:46.439 [2024-12-08 18:39:04.228236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.228262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 
18:39:04.239199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f8e88 00:21:46.439 [2024-12-08 18:39:04.241150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.241170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.254255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f8618 00:21:46.439 [2024-12-08 18:39:04.256254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.256490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.267355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f7da8 00:21:46.439 [2024-12-08 18:39:04.269392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.269500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.280578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f7538 00:21:46.439 [2024-12-08 18:39:04.282559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.282675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.293660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f6cc8 00:21:46.439 [2024-12-08 18:39:04.295608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.295720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.307206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f6458 00:21:46.439 [2024-12-08 18:39:04.309472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.309515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.321208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f5be8 00:21:46.439 [2024-12-08 18:39:04.323595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.323636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.439 
[2024-12-08 18:39:04.335532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f5378 00:21:46.439 [2024-12-08 18:39:04.337682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.337724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.349272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f4b08 00:21:46.439 [2024-12-08 18:39:04.351226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.351268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.439 [2024-12-08 18:39:04.362593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f4298 00:21:46.439 [2024-12-08 18:39:04.364533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.439 [2024-12-08 18:39:04.364576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.697 [2024-12-08 18:39:04.376423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f3a28 00:21:46.698 [2024-12-08 18:39:04.378278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.698 [2024-12-08 18:39:04.378320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.698 [2024-12-08 18:39:04.389617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8210) with pdu=0x2000198f31b8 00:21:46.698 [2024-12-08 18:39:04.391472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.698 [2024-12-08 18:39:04.391513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.698 19292.50 IOPS, 75.36 MiB/s 00:21:46.698 Latency(us) 00:21:46.698 [2024-12-08T18:39:04.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.698 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:46.698 nvme0n1 : 2.00 19335.34 75.53 0.00 0.00 6615.00 5153.51 25022.84 00:21:46.698 [2024-12-08T18:39:04.628Z] =================================================================================================================== 00:21:46.698 [2024-12-08T18:39:04.628Z] Total : 19335.34 75.53 0.00 0.00 6615.00 5153.51 25022.84 00:21:46.698 { 00:21:46.698 "results": [ 00:21:46.698 { 00:21:46.698 "job": "nvme0n1", 00:21:46.698 "core_mask": "0x2", 00:21:46.698 "workload": "randwrite", 00:21:46.698 "status": "finished", 00:21:46.698 "queue_depth": 128, 00:21:46.698 "io_size": 4096, 00:21:46.698 "runtime": 2.002189, 00:21:46.698 "iops": 19335.337473135653, 00:21:46.698 "mibps": 75.52866200443614, 00:21:46.698 "io_failed": 0, 00:21:46.698 
"io_timeout": 0, 00:21:46.698 "avg_latency_us": 6615.0026318619775, 00:21:46.698 "min_latency_us": 5153.512727272728, 00:21:46.698 "max_latency_us": 25022.836363636365 00:21:46.698 } 00:21:46.698 ], 00:21:46.698 "core_count": 1 00:21:46.698 } 00:21:46.698 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:46.698 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:46.698 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:46.698 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:46.698 | .driver_specific 00:21:46.698 | .nvme_error 00:21:46.698 | .status_code 00:21:46.698 | .command_transient_transport_error' 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94906 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94906 ']' 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94906 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94906 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:46.956 killing process with pid 94906 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94906' 00:21:46.956 Received shutdown signal, test time was about 2.000000 seconds 00:21:46.956 00:21:46.956 Latency(us) 00:21:46.956 [2024-12-08T18:39:04.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.956 [2024-12-08T18:39:04.886Z] =================================================================================================================== 00:21:46.956 [2024-12-08T18:39:04.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94906 00:21:46.956 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94906 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:47.215 
18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94953 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94953 /var/tmp/bperf.sock 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94953 ']' 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.215 18:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.215 [2024-12-08 18:39:05.049451] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:47.215 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:47.215 Zero copy mechanism will not be used. 00:21:47.215 [2024-12-08 18:39:05.049996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94953 ] 00:21:47.474 [2024-12-08 18:39:05.185524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.474 [2024-12-08 18:39:05.243449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.474 [2024-12-08 18:39:05.312269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:48.410 18:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.410 18:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:48.410 18:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:48.410 18:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:48.410 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:48.410 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.410 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.410 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.410 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.410 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.669 nvme0n1 00:21:48.669 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:48.669 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.669 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.669 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.669 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:48.669 18:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.930 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:48.930 Zero copy mechanism will not be used. 00:21:48.930 Running I/O for 2 seconds... 00:21:48.930 [2024-12-08 18:39:06.692331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.692620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.692679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.697561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.697825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.697865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.702514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.702773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.702804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.708463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.708723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.708744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.714459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.714735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.714755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.720630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.720881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.720911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.726393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.726677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.726697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.731710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.732005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.732036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.736854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.737119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.737177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.741856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.742124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.742146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.746782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.747039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.747071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.751709] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.751978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.752012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.756940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.757222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.757262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.762251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.762545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.762595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.767516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.767861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.772859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.773138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.773172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.778227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.778520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.778543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.783552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.783825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.783851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:48.930 [2024-12-08 18:39:06.788930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.789190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.930 [2024-12-08 18:39:06.789216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.930 [2024-12-08 18:39:06.794116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.930 [2024-12-08 18:39:06.794378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.794413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.799289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.801170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.801199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.806171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.806461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.806497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.811586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.811861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.811886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.816692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.816953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.816979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.821826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.822088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.822110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.826871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.827132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.827158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.832021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.832286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.832306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.837088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.837351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.837377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.842082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.842346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.842372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.847132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.847574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.847600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.852533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:48.931 [2024-12-08 18:39:06.852798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.931 [2024-12-08 18:39:06.852823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.931 [2024-12-08 18:39:06.857650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.857926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.857955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.863025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.863505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.863561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.868416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.868719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.868744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.873555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.873817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.873844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.878571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.878833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.878860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.883525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.883788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.883849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.888559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.888820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.888845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.893600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.893861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.893886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.898617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.898881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.898907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.903579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.903849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.903874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.908749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.909010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.909036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.913752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.914014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.914040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.918729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.919010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.919036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.923925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.924191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.924217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.929038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.929301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 
[2024-12-08 18:39:06.929322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.934021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.934458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.934481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.939400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.939695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.939721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.944502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.944765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.944785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.949522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.949786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.949812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.954715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.954993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.955019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.959730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.960018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.960043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.964807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.192 [2024-12-08 18:39:06.965072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:49.192 [2024-12-08 18:39:06.965097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.192 [2024-12-08 18:39:06.969971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:06.970397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:06.970434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:06.975307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:06.975579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:06.975604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:06.980372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:06.980675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:06.980727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:06.985582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:06.985826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:06.985852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:06.990443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:06.990690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:06.990715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:06.995352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:06.995647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:06.995669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.000311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.000571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.000594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.005180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.005587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.005610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.010258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.010517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.010550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.015251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.015508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.015528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.020175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.020435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.020472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.025069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.025476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.025503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.030228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.030488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.030513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.035128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.035373] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.035398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.040105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.040367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.040392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.044928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.045320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.045354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.050035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.050281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.050306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.054939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.055184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.055205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.059862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.060133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.060158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.064762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.065008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.065034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.069623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.069869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.069895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.074432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.074677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.074697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.079313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.079571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.079598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.084559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.084838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.084859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.089886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.090166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.090202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.095312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.095586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.095612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.101098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.193 [2024-12-08 18:39:07.101398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.193 [2024-12-08 18:39:07.101444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.193 [2024-12-08 18:39:07.106531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.194 
[2024-12-08 18:39:07.106800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.194 [2024-12-08 18:39:07.106834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.194 [2024-12-08 18:39:07.112159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.194 [2024-12-08 18:39:07.112562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.194 [2024-12-08 18:39:07.112602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.194 [2024-12-08 18:39:07.117883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.194 [2024-12-08 18:39:07.118171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.194 [2024-12-08 18:39:07.118197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.454 [2024-12-08 18:39:07.123331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.123602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.123627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.128532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.128778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.128804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.133442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.133711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.133740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.138254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.138515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.138547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.143280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) 
with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.143539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.143564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.148282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.148689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.148716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.153444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.153708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.153742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.158337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.158598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.158626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.163306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.163579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.163607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.168225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.168630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.168658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.173289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.173564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.173590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.178124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.178368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.178393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.183070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.183316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.183342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.187949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.188377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.188413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.193160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.193421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.193446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.197932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.198177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.198197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.202809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.203055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.203082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.207692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.207967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.207994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.212625] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.212870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.212895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.217562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.217823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.217848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.222489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.222733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.222758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.227257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.227660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.227696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.232352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.232611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.232635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.237172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.237450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.237476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.242034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.242278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.242304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:49.455 [2024-12-08 18:39:07.246949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.455 [2024-12-08 18:39:07.247352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.455 [2024-12-08 18:39:07.247384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.455 [2024-12-08 18:39:07.252063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.252360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.252385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.256963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.257208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.257233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.261781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.262025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.262051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.266642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.266888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.266908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.271548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.271801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.271836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.276477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.276723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.276748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.281309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.281579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.281599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.286128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.286371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.286396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.291117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.291539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.291568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.296263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.296524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.296545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.301210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.301479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.301505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.306069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.306312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.306340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.311004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.311423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.311451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.316134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.316398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.316446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.321007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.321253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.321274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.325963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.326209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.326234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.330782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.331026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.331052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.335649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.335900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.335925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.340534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.340777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.340804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.345336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.345604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.345627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.350287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.350692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.350720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.355354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.355615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.355641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.360344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.360633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.360661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.365234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.365505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.365531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.370120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.370526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.370551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.375300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.375727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.456 [2024-12-08 18:39:07.376031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.456 [2024-12-08 18:39:07.381041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.456 [2024-12-08 18:39:07.381472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.717 
[2024-12-08 18:39:07.381651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.717 [2024-12-08 18:39:07.386591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.717 [2024-12-08 18:39:07.386839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.717 [2024-12-08 18:39:07.386866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.717 [2024-12-08 18:39:07.391522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.717 [2024-12-08 18:39:07.391772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.717 [2024-12-08 18:39:07.391817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.717 [2024-12-08 18:39:07.396517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.717 [2024-12-08 18:39:07.396762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.717 [2024-12-08 18:39:07.396788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.717 [2024-12-08 18:39:07.401531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.717 [2024-12-08 18:39:07.401792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.717 [2024-12-08 18:39:07.401828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.717 [2024-12-08 18:39:07.406360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.717 [2024-12-08 18:39:07.406791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.717 [2024-12-08 18:39:07.406818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.717 [2024-12-08 18:39:07.411496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.717 [2024-12-08 18:39:07.411758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.717 [2024-12-08 18:39:07.411784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.717 [2024-12-08 18:39:07.416406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.717 [2024-12-08 18:39:07.416684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.416712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.421288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.421561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.421587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.426166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.426563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.426589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.431201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.431459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.431484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.436209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.436479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.436505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.441087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.441330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.441356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.445953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.446372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.446413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.451086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.451333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.451354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.456003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.456269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.456295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.460995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.461241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.461266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.465882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.466129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.466155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.470731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.470978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.471004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.475583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.475839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.475863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.480441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.480687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.480712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.485235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.485492] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.485519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.490102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.490507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.490535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.495177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.495435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.495456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.500066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.500360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.500386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.505006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.505251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.505276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.509871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.510116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.510142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.514785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.515035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.515060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.519677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.519930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.519956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.524568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.524814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.524840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.529526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.529789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.529822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.534474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.534719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.534744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.539363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.539641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.539662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.544260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.544516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.718 [2024-12-08 18:39:07.544542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.718 [2024-12-08 18:39:07.549175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.718 [2024-12-08 18:39:07.549581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.549615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.554228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 
18:39:07.554489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.554511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.559107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.559351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.559378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.564235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.564509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.564534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.569124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.569526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.569563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.574186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.574450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.574475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.579042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.579293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.579319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.583942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.584210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.584235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.588911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with 
pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.589351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.589378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.594010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.594258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.594284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.598838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.599085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.599110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.603718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.603975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.603995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.608642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.608888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.608914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.613570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.613833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.613858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.618478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.618723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.618748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.623339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.623596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.623623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.628446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.628892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.628919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.633580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.633842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.633867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.638512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.638757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.638782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.719 [2024-12-08 18:39:07.643555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.719 [2024-12-08 18:39:07.643823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.719 [2024-12-08 18:39:07.643879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.648703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.648949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.648974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.653709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.653974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.653995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.658522] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.658770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.658796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.663492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.663738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.663764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.668393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.668670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.668699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.673338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.673608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.673638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.678187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.678447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.678468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.980 6076.00 IOPS, 759.50 MiB/s [2024-12-08T18:39:07.910Z] [2024-12-08 18:39:07.684076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.980 [2024-12-08 18:39:07.684327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.980 [2024-12-08 18:39:07.684356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.980 [2024-12-08 18:39:07.688976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.689243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.689269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.693853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.694099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.694119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.698774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.699015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.699042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.703665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.703935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.703956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.708600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.708846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.708872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.713513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.713776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.713801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.718434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.718679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.718704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.723308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.723568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.723593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.728222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.728478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.728504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.733147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.733392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.733436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.737960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.738354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.738380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.743077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.743332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.743357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.748033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.748296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.748316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.752999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.753245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.753266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.757928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.758176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.758210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.762775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.763038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.763064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.767641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.767961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.767987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.772703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.772948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.772968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.777557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.777817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.777843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.782429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.782673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.782693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.787197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.787455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.787481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.792058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.792338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 
[2024-12-08 18:39:07.792365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.797015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.797261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.797286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.801867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.802114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.802135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.806724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.806972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.806998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.811533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.811779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.811822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.981 [2024-12-08 18:39:07.816505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.981 [2024-12-08 18:39:07.816748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.981 [2024-12-08 18:39:07.816774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.821398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.821674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.821734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.826232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.826648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.826676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.831351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.831632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.831657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.836241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.836502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.836535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.841117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.841364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.841389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.846070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.846483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.846511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.851147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.851393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.851430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.856030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.856294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.856321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.860977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.861222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.861249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.865882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.866277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.866306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.870985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.871232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.871258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.875843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.876106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.876144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.880799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.881042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.881068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.885665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.885927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.885953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.890564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.890808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.890834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.895443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.895687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.895712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.900309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.900568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.900589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.982 [2024-12-08 18:39:07.905291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:49.982 [2024-12-08 18:39:07.905784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.982 [2024-12-08 18:39:07.905859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.910639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.910902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.910928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.915684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.915979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.916005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.920686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.920946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.920972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.925562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.925806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.925832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.930379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 
[2024-12-08 18:39:07.930672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.930702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.935378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.935632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.935658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.940192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.940591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.940619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.945246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.945504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.945529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.950169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.950439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.950464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.955078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.955325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.955351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.960080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.960531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.960560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.965145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with 
pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.965392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.965429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.970041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.970310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.975320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.975607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.975634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.980651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.980915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.980941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.985829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.986091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.986117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.991513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.991777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.991810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:07.997058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:07.997345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:07.997371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.243 [2024-12-08 18:39:08.002426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.243 [2024-12-08 18:39:08.002736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.243 [2024-12-08 18:39:08.002762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.007767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.008056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.008081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.013092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.013353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.013378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.018259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.018536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.018572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.023303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.023737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.023764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.028544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.028814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.028839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.033483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.033745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.033770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.038395] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.038666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.038687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.043505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.043766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.043798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.048520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.048783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.048820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.053472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.053734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.053759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.058492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.058752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.058777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.063668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.063937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.063963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.068774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.069053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.069078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:50.244 [2024-12-08 18:39:08.073771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.074034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.074059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.078760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.079023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.079048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.083767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.084055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.084080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.088742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.089018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.089045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.093832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.094092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.094118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.098850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.099110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.099136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.104272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.104759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.104781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.109951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.110215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.110241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.115713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.115989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.116014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.121580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.121832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.121857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.127279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.127712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.127740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.132991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.133254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.133279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.138545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.138820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.138846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.244 [2024-12-08 18:39:08.143959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.244 [2024-12-08 18:39:08.144228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.244 [2024-12-08 18:39:08.144253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.245 [2024-12-08 18:39:08.149358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.245 [2024-12-08 18:39:08.149677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.245 [2024-12-08 18:39:08.149702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.245 [2024-12-08 18:39:08.154877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.245 [2024-12-08 18:39:08.155297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.245 [2024-12-08 18:39:08.155324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.245 [2024-12-08 18:39:08.160352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.245 [2024-12-08 18:39:08.160647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.245 [2024-12-08 18:39:08.160674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.245 [2024-12-08 18:39:08.165634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.245 [2024-12-08 18:39:08.165943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.245 [2024-12-08 18:39:08.165971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.170937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.171240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.171267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.176210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.176512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.176550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.181462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.181724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.181751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.186686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.186949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.186975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.191655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.191946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.191972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.196745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.197007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.197033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.201718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.201963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.201983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.206650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.206896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.206917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.211451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.211698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.211723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.216387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.216830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 
[2024-12-08 18:39:08.216867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.221537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.221782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.221807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.226367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.226653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.226729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.231482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.231727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.231753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.236388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.236829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.236867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.241513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.241758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.241783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.246553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.246798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.246823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.251374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.251658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.251696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.256381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.256820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.256842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.261602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.262006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.262180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.267123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.267567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.267730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.272662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.273111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.273269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.278038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.278468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.278642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.283481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.283945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.284137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.289177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.289628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.289784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.294571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.294985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.295160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.511 [2024-12-08 18:39:08.299950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.511 [2024-12-08 18:39:08.300405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.511 [2024-12-08 18:39:08.300466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.305030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.305276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.305302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.309874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.310135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.310161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.314733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.314978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.315003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.319643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.319917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.319944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.324534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.324796] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.324835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.329385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.329667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.329704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.334355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.334617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.334642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.339188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.339588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.339616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.344234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.344505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.344528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.349047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.349293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.349319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.353858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.354103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.354129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.358642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.358887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.358912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.363381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.363678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.363713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.368292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.368549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.368575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.373151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.373398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.373444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.377985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.378375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.378414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.383017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.383266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.383291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.387927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.388177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.388203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.392855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 
[2024-12-08 18:39:08.393099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.393120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.397717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.397977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.398003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.402511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.402756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.402782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.407394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.407670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.407694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.412384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.412677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.412710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.417433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.417695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.417751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.422281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.422539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.422562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.427209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) 
with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.427466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.427487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.512 [2024-12-08 18:39:08.432258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.512 [2024-12-08 18:39:08.432671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.512 [2024-12-08 18:39:08.432703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.437307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.437538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.437589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.442794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.443032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.443250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.448409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.448686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.448970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.454001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.454253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.454543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.459335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.459609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.459889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.464857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.465094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.465280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.469965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.470217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.470480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.475292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.475560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.475770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.480605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.480688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.480709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.485443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.485522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.485544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.490306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.490370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.490391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.495181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.495245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.495265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.500135] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.500341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.500362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.505207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.505272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.505293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.510052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.510118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.510138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.514879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.514943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.514964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.519686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.519751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.519772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.524534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.524599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.524619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.529377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.529490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.529510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:50.811 [2024-12-08 18:39:08.534249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.534310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.534330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.539102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.539166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.539187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.543902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.543984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.544004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.811 [2024-12-08 18:39:08.548834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.811 [2024-12-08 18:39:08.548898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.811 [2024-12-08 18:39:08.548919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.553787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.553850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.553871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.558647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.558712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.558732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.563506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.563604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.563636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.568421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.568515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.568535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.573277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.573342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.573362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.578093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.578158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.578178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.582973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.583041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.583060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.587790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.587897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.587918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.592643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.592708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.592728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.597386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.597502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.597522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.602220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.602284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.602304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.607072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.607298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.607317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.612180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.612246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.612266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.617063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.617130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.617149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.621856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.621920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.621940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.626626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.626690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.626710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.631428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.631492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.631512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.636196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.636259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.636278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.641046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.641112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.641132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.645868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.645933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.645953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.650755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.650832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.650852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.655564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.655626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.655646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.660420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.660497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.660517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.665216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.665280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 
[2024-12-08 18:39:08.665300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.670025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.670235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.670255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.812 [2024-12-08 18:39:08.675074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.675138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.812 [2024-12-08 18:39:08.675158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.812 6118.00 IOPS, 764.75 MiB/s [2024-12-08T18:39:08.742Z] [2024-12-08 18:39:08.680662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x4e8550) with pdu=0x2000198fef90 00:21:50.812 [2024-12-08 18:39:08.680725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.813 [2024-12-08 18:39:08.680745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.813 00:21:50.813 Latency(us) 00:21:50.813 [2024-12-08T18:39:08.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.813 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:50.813 nvme0n1 : 2.00 6116.33 764.54 0.00 0.00 2610.91 1936.29 11617.75 00:21:50.813 [2024-12-08T18:39:08.743Z] =================================================================================================================== 00:21:50.813 [2024-12-08T18:39:08.743Z] Total : 6116.33 764.54 0.00 0.00 2610.91 1936.29 11617.75 00:21:50.813 { 00:21:50.813 "results": [ 00:21:50.813 { 00:21:50.813 "job": "nvme0n1", 00:21:50.813 "core_mask": "0x2", 00:21:50.813 "workload": "randwrite", 00:21:50.813 "status": "finished", 00:21:50.813 "queue_depth": 16, 00:21:50.813 "io_size": 131072, 00:21:50.813 "runtime": 2.003161, 00:21:50.813 "iops": 6116.3331354793745, 00:21:50.813 "mibps": 764.5416419349218, 00:21:50.813 "io_failed": 0, 00:21:50.813 "io_timeout": 0, 00:21:50.813 "avg_latency_us": 2610.914618466744, 00:21:50.813 "min_latency_us": 1936.290909090909, 00:21:50.813 "max_latency_us": 11617.745454545455 00:21:50.813 } 00:21:50.813 ], 00:21:50.813 "core_count": 1 00:21:50.813 } 00:21:50.813 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:50.813 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:50.813 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:50.813 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
'.bdevs[0] 00:21:50.813 | .driver_specific 00:21:50.813 | .nvme_error 00:21:50.813 | .status_code 00:21:50.813 | .command_transient_transport_error' 00:21:51.107 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 395 > 0 )) 00:21:51.107 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94953 00:21:51.107 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94953 ']' 00:21:51.107 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94953 00:21:51.107 18:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:51.107 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.107 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94953 00:21:51.376 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.376 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.376 killing process with pid 94953 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94953' 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94953 00:21:51.377 Received shutdown signal, test time was about 2.000000 seconds 00:21:51.377 00:21:51.377 Latency(us) 00:21:51.377 [2024-12-08T18:39:09.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.377 [2024-12-08T18:39:09.307Z] =================================================================================================================== 00:21:51.377 [2024-12-08T18:39:09.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94953 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94770 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94770 ']' 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94770 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.377 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94770 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:51.636 killing process with pid 94770 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94770' 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94770 00:21:51.636 18:39:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94770 00:21:51.636 00:21:51.636 real 0m16.716s 00:21:51.636 user 0m30.844s 00:21:51.636 sys 0m5.647s 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:51.636 ************************************ 00:21:51.636 END TEST nvmf_digest_error 00:21:51.636 ************************************ 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:51.636 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:51.895 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.895 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:51.895 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.896 rmmod nvme_tcp 00:21:51.896 rmmod nvme_fabrics 00:21:51.896 rmmod nvme_keyring 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 94770 ']' 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 94770 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 94770 ']' 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 94770 00:21:51.896 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (94770) - No such process 00:21:51.896 Process with pid 94770 is not found 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 94770 is not found' 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
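The iptr cleanup seen just above removes only the firewall rules the test itself installed: every SPDK rule is added with an '-m comment --comment SPDK_NVMF:...' tag (the matching ipts helper appears later when nvmf_veth_init runs), so teardown is a filter over the saved ruleset rather than a blanket flush. A minimal sketch of the same add/remove pairing, assuming only the SPDK_NVMF tag convention shown in this log:

  # add a rule tagged so it can be identified again at cleanup time
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # drop every tagged rule in one pass, leaving unrelated rules untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore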
00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:51.896 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:52.155 00:21:52.155 real 0m34.530s 00:21:52.155 user 1m2.484s 00:21:52.155 sys 0m11.703s 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 ************************************ 00:21:52.155 END TEST nvmf_digest 00:21:52.155 ************************************ 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.155 ************************************ 00:21:52.155 START TEST nvmf_host_multipath 00:21:52.155 ************************************ 00:21:52.155 18:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:52.155 * Looking for test storage... 
00:21:52.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:52.155 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:52.155 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:21:52.155 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.415 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:52.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.416 --rc genhtml_branch_coverage=1 00:21:52.416 --rc genhtml_function_coverage=1 00:21:52.416 --rc genhtml_legend=1 00:21:52.416 --rc geninfo_all_blocks=1 00:21:52.416 --rc geninfo_unexecuted_blocks=1 00:21:52.416 00:21:52.416 ' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:52.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.416 --rc genhtml_branch_coverage=1 00:21:52.416 --rc genhtml_function_coverage=1 00:21:52.416 --rc genhtml_legend=1 00:21:52.416 --rc geninfo_all_blocks=1 00:21:52.416 --rc geninfo_unexecuted_blocks=1 00:21:52.416 00:21:52.416 ' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:52.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.416 --rc genhtml_branch_coverage=1 00:21:52.416 --rc genhtml_function_coverage=1 00:21:52.416 --rc genhtml_legend=1 00:21:52.416 --rc geninfo_all_blocks=1 00:21:52.416 --rc geninfo_unexecuted_blocks=1 00:21:52.416 00:21:52.416 ' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:52.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.416 --rc genhtml_branch_coverage=1 00:21:52.416 --rc genhtml_function_coverage=1 00:21:52.416 --rc genhtml_legend=1 00:21:52.416 --rc geninfo_all_blocks=1 00:21:52.416 --rc geninfo_unexecuted_blocks=1 00:21:52.416 00:21:52.416 ' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.416 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:52.416 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:52.417 Cannot find device "nvmf_init_br" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:52.417 Cannot find device "nvmf_init_br2" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:52.417 Cannot find device "nvmf_tgt_br" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:52.417 Cannot find device "nvmf_tgt_br2" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:52.417 Cannot find device "nvmf_init_br" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:52.417 Cannot find device "nvmf_init_br2" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:52.417 Cannot find device "nvmf_tgt_br" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:52.417 Cannot find device "nvmf_tgt_br2" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:52.417 Cannot find device "nvmf_br" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:52.417 Cannot find device "nvmf_init_if" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:52.417 Cannot find device "nvmf_init_if2" 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:52.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:52.417 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
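The nvmf_veth_init sequence above (and continuing below) builds the virtual test topology: the target interfaces live in the nvmf_tgt_ns_spdk namespace, the initiator ends stay in the root namespace, and the bridge-side peers are all enslaved to nvmf_br. A minimal standalone sketch of the same layout, using the interface names and 10.0.0.0/24 addresses visible in the log but covering only one initiator/target pair for brevity:

  ip netns add nvmf_tgt_ns_spdk                              # namespace for the SPDK target
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge joining the peer ends
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br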
00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:52.676 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:52.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:52.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:21:52.677 00:21:52.677 --- 10.0.0.3 ping statistics --- 00:21:52.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.677 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:52.677 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:52.677 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:21:52.677 00:21:52.677 --- 10.0.0.4 ping statistics --- 00:21:52.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.677 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:52.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:52.677 00:21:52.677 --- 10.0.0.1 ping statistics --- 00:21:52.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.677 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:52.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:52.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:21:52.677 00:21:52.677 --- 10.0.0.2 ping statistics --- 00:21:52.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.677 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=95282 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 95282 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95282 ']' 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.677 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:52.936 [2024-12-08 18:39:10.641842] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:52.936 [2024-12-08 18:39:10.641934] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.936 [2024-12-08 18:39:10.782860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:52.936 [2024-12-08 18:39:10.856049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.936 [2024-12-08 18:39:10.856128] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.936 [2024-12-08 18:39:10.856143] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.936 [2024-12-08 18:39:10.856155] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.936 [2024-12-08 18:39:10.856164] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.936 [2024-12-08 18:39:10.856346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.936 [2024-12-08 18:39:10.856366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.196 [2024-12-08 18:39:10.919715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:53.196 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.196 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:53.196 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:53.196 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:53.196 18:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:53.196 18:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.196 18:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95282 00:21:53.196 18:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:53.457 [2024-12-08 18:39:11.333704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.457 18:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:54.025 Malloc0 00:21:54.025 18:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:54.025 18:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.285 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:54.544 [2024-12-08 18:39:12.452014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:54.544 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:54.804 [2024-12-08 18:39:12.668225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:54.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95326 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95326 /var/tmp/bdevperf.sock 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95326 ']' 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.804 18:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:55.741 18:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.741 18:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:55.741 18:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:56.000 18:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:56.569 Nvme0n1 00:21:56.569 18:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:56.828 Nvme0n1 00:21:56.828 18:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:56.828 18:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:57.762 18:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:57.762 18:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:58.020 18:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:58.278 18:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:58.278 18:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95282 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:58.278 18:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95371 00:21:58.278 18:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:04.848 Attaching 4 probes... 00:22:04.848 @path[10.0.0.3, 4421]: 18906 00:22:04.848 @path[10.0.0.3, 4421]: 19368 00:22:04.848 @path[10.0.0.3, 4421]: 19316 00:22:04.848 @path[10.0.0.3, 4421]: 19280 00:22:04.848 @path[10.0.0.3, 4421]: 19310 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95371 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:04.848 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:05.108 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:05.108 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95491 00:22:05.108 18:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95282 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:05.108 18:39:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:11.672 18:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:11.672 18:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.672 Attaching 4 probes... 00:22:11.672 @path[10.0.0.3, 4420]: 18650 00:22:11.672 @path[10.0.0.3, 4420]: 19064 00:22:11.672 @path[10.0.0.3, 4420]: 19179 00:22:11.672 @path[10.0.0.3, 4420]: 19318 00:22:11.672 @path[10.0.0.3, 4420]: 19352 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95491 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:11.672 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:11.932 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:11.932 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95282 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:11.932 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95610 00:22:11.932 18:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:18.500 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:18.500 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.500 Attaching 4 probes... 00:22:18.500 @path[10.0.0.3, 4421]: 15382 00:22:18.500 @path[10.0.0.3, 4421]: 19150 00:22:18.500 @path[10.0.0.3, 4421]: 19123 00:22:18.500 @path[10.0.0.3, 4421]: 19070 00:22:18.500 @path[10.0.0.3, 4421]: 19086 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95610 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:18.500 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:18.759 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:18.759 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95719 00:22:18.759 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95282 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:18.759 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:25.325 Attaching 4 probes... 
00:22:25.325 00:22:25.325 00:22:25.325 00:22:25.325 00:22:25.325 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95719 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:25.325 18:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:25.325 18:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:25.583 18:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:25.584 18:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95837 00:22:25.584 18:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95282 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:25.584 18:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.150 Attaching 4 probes... 
00:22:32.150 @path[10.0.0.3, 4421]: 18572 00:22:32.150 @path[10.0.0.3, 4421]: 18976 00:22:32.150 @path[10.0.0.3, 4421]: 18968 00:22:32.150 @path[10.0.0.3, 4421]: 18864 00:22:32.150 @path[10.0.0.3, 4421]: 19041 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95837 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.150 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:32.150 [2024-12-08 18:39:50.016047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861300 is same with the state(6) to be set 00:22:32.150 [2024-12-08 18:39:50.016098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861300 is same with the state(6) to be set 00:22:32.150 [2024-12-08 18:39:50.016121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861300 is same with the state(6) to be set 00:22:32.150 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:33.531 18:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:33.531 18:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95955 00:22:33.531 18:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95282 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:33.531 18:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:40.100 Attaching 4 probes... 
00:22:40.100 @path[10.0.0.3, 4420]: 18264 00:22:40.100 @path[10.0.0.3, 4420]: 18533 00:22:40.100 @path[10.0.0.3, 4420]: 18474 00:22:40.100 @path[10.0.0.3, 4420]: 18532 00:22:40.100 @path[10.0.0.3, 4420]: 18597 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95955 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:40.100 [2024-12-08 18:39:57.613582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:40.100 18:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:46.666 18:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:46.666 18:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96135 00:22:46.666 18:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95282 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:46.666 18:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:52.021 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:52.021 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:52.280 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:52.280 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.280 Attaching 4 probes... 
00:22:52.280 @path[10.0.0.3, 4421]: 18903 00:22:52.280 @path[10.0.0.3, 4421]: 18832 00:22:52.280 @path[10.0.0.3, 4421]: 18992 00:22:52.280 @path[10.0.0.3, 4421]: 19139 00:22:52.280 @path[10.0.0.3, 4421]: 19092 00:22:52.280 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:52.280 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:52.280 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96135 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95326 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95326 ']' 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95326 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95326 00:22:52.543 killing process with pid 95326 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95326' 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95326 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95326 00:22:52.543 { 00:22:52.543 "results": [ 00:22:52.543 { 00:22:52.543 "job": "Nvme0n1", 00:22:52.543 "core_mask": "0x4", 00:22:52.543 "workload": "verify", 00:22:52.543 "status": "terminated", 00:22:52.543 "verify_range": { 00:22:52.543 "start": 0, 00:22:52.543 "length": 16384 00:22:52.543 }, 00:22:52.543 "queue_depth": 128, 00:22:52.543 "io_size": 4096, 00:22:52.543 "runtime": 55.571529, 00:22:52.543 "iops": 8112.103591751093, 00:22:52.543 "mibps": 31.687904655277706, 00:22:52.543 "io_failed": 0, 00:22:52.543 "io_timeout": 0, 00:22:52.543 "avg_latency_us": 15750.984783531252, 00:22:52.543 "min_latency_us": 1042.6181818181817, 00:22:52.543 "max_latency_us": 7015926.69090909 00:22:52.543 } 00:22:52.543 ], 00:22:52.543 "core_count": 1 00:22:52.543 } 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95326 00:22:52.543 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:52.543 [2024-12-08 18:39:12.731334] Starting SPDK v24.09.1-pre git sha1 
b18e1bd62 / DPDK 22.11.4 initialization... 00:22:52.543 [2024-12-08 18:39:12.731458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95326 ] 00:22:52.543 [2024-12-08 18:39:12.864618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.543 [2024-12-08 18:39:12.939588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.543 [2024-12-08 18:39:12.992562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:52.543 [2024-12-08 18:39:14.527619] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:22:52.543 Running I/O for 90 seconds... 00:22:52.543 9088.00 IOPS, 35.50 MiB/s [2024-12-08T18:40:10.473Z] 9495.50 IOPS, 37.09 MiB/s [2024-12-08T18:40:10.473Z] 9541.00 IOPS, 37.27 MiB/s [2024-12-08T18:40:10.473Z] 9579.75 IOPS, 37.42 MiB/s [2024-12-08T18:40:10.473Z] 9595.00 IOPS, 37.48 MiB/s [2024-12-08T18:40:10.473Z] 9603.83 IOPS, 37.51 MiB/s [2024-12-08T18:40:10.473Z] 9611.29 IOPS, 37.54 MiB/s [2024-12-08T18:40:10.473Z] 9614.88 IOPS, 37.56 MiB/s [2024-12-08T18:40:10.473Z] [2024-12-08 18:39:22.855651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.543 [2024-12-08 18:39:22.855699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.543 [2024-12-08 18:39:22.855764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.543 [2024-12-08 18:39:22.855783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.543 [2024-12-08 18:39:22.855815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.543 [2024-12-08 18:39:22.855830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.543 [2024-12-08 18:39:22.855849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.543 [2024-12-08 18:39:22.855863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.543 [2024-12-08 18:39:22.855882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.543 [2024-12-08 18:39:22.855895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.855914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.855927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:22:52.544 [2024-12-08 18:39:22.855954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.855990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.856739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.856977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.856996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.544 [2024-12-08 18:39:22.857322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.857356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.857389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.544 [2024-12-08 18:39:22.857434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.544 [2024-12-08 18:39:22.857455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:67 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 
18:39:22.857882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.857969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.857990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.545 [2024-12-08 18:39:22.858765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.545 [2024-12-08 18:39:22.858824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.545 [2024-12-08 18:39:22.858839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.858865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.858879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.858899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.858913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.858932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.858946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.858965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.858979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.858998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:63 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.859631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859650] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.859664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.859697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.859741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.859776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.859828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.859862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.859881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.859895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.546 [2024-12-08 18:39:22.861330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 
sqhd:0029 p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.546 [2024-12-08 18:39:22.861858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.546 [2024-12-08 18:39:22.861882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.861898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:22.861918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.861932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:22.861951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.861965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:22.861984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.861998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:22.862018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.862032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:22.862052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.862066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:22.862086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.862100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:22.862123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:22.862138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.547 9579.56 IOPS, 37.42 MiB/s [2024-12-08T18:40:10.477Z] 9574.40 IOPS, 37.40 MiB/s [2024-12-08T18:40:10.477Z] 9572.36 IOPS, 37.39 MiB/s [2024-12-08T18:40:10.477Z] 9575.33 IOPS, 37.40 MiB/s [2024-12-08T18:40:10.477Z] 9576.62 IOPS, 37.41 MiB/s [2024-12-08T18:40:10.477Z] 9581.14 IOPS, 37.43 MiB/s [2024-12-08T18:40:10.477Z] [2024-12-08 18:39:29.445716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.445778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.445846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.445865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.445886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.445921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.445942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.445956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.445992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 
[2024-12-08 18:39:29.446364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.547 [2024-12-08 18:39:29.446676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82112 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.547 [2024-12-08 18:39:29.446919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.547 [2024-12-08 18:39:29.446934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.446955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.446968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.446998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.447457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.447981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.447996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.448030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.448065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.548 [2024-12-08 18:39:29.448100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.448170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.448205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.448240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.448273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.448308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.548 [2024-12-08 18:39:29.448342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.548 [2024-12-08 18:39:29.448362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.549 [2024-12-08 18:39:29.448375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.814 [2024-12-08 18:39:29.448417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 
18:39:29.448775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.814 [2024-12-08 18:39:29.448886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.814 [2024-12-08 18:39:29.448906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.448920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.448940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.448954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.448974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.448988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82328 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:96 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449834] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.449848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.449882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.449916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.449950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.449971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.449985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.450018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.450052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.450087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.815 [2024-12-08 18:39:29.450121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.450189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 
sqhd:0014 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.450227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.450261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.815 [2024-12-08 18:39:29.450282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.815 [2024-12-08 18:39:29.450296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:29.450330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:29.450364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:29.450399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:29.450448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:29.450716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:29.450729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.816 9417.60 IOPS, 36.79 MiB/s [2024-12-08T18:40:10.746Z] 8973.94 IOPS, 35.05 MiB/s [2024-12-08T18:40:10.746Z] 9008.76 IOPS, 35.19 MiB/s [2024-12-08T18:40:10.746Z] 9039.50 IOPS, 35.31 MiB/s [2024-12-08T18:40:10.746Z] 9066.89 IOPS, 35.42 MiB/s [2024-12-08T18:40:10.746Z] 9091.05 IOPS, 35.51 MiB/s [2024-12-08T18:40:10.746Z] 9112.71 IOPS, 35.60 MiB/s [2024-12-08T18:40:10.746Z] [2024-12-08 18:39:36.580562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.580632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.580704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.580739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 
18:39:36.580771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.580814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.580846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.580878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.580909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.580968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.580986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.580999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.581031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.581062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-12-08 18:39:36.581093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25144 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581472] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.816 [2024-12-08 18:39:36.581517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.816 [2024-12-08 18:39:36.581535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.581549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.581582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.581614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.581646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.581678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.581711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.581742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.581784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 
18:39:36.581805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.581819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.581851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.581882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.581914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.581945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.581976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.581994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.582008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 
cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.817 [2024-12-08 18:39:36.582633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.582664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.582696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.582727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.582758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.582790] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-12-08 18:39:36.582822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.817 [2024-12-08 18:39:36.582840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.582853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.582872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.582885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.582905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.582918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.582936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.582950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.582968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.582981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.818 [2024-12-08 18:39:36.583954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.583977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.583991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.818 [2024-12-08 18:39:36.584620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-12-08 18:39:36.584633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:22:52.818 [2024-12-08 18:39:36.584655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.584669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.584705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.584742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.584809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.584857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.584893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.584929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.584965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.584988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:36.585367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:36.585663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-12-08 18:39:36.585678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.819 9076.82 IOPS, 35.46 MiB/s [2024-12-08T18:40:10.749Z] 8682.17 IOPS, 33.91 MiB/s [2024-12-08T18:40:10.749Z] 8320.42 IOPS, 32.50 MiB/s [2024-12-08T18:40:10.749Z] 7987.60 IOPS, 31.20 MiB/s [2024-12-08T18:40:10.749Z] 7680.38 IOPS, 30.00 MiB/s [2024-12-08T18:40:10.749Z] 7395.93 IOPS, 28.89 MiB/s [2024-12-08T18:40:10.749Z] 7131.79 IOPS, 27.86 MiB/s [2024-12-08T18:40:10.749Z] 6920.24 IOPS, 27.03 MiB/s [2024-12-08T18:40:10.749Z] 7001.03 IOPS, 27.35 MiB/s [2024-12-08T18:40:10.749Z] 7080.10 IOPS, 27.66 MiB/s [2024-12-08T18:40:10.749Z] 7155.84 IOPS, 27.95 MiB/s [2024-12-08T18:40:10.749Z] 7224.58 IOPS, 28.22 MiB/s [2024-12-08T18:40:10.749Z] 7292.79 IOPS, 28.49 MiB/s [2024-12-08T18:40:10.749Z] 7354.37 IOPS, 28.73 MiB/s [2024-12-08T18:40:10.749Z] [2024-12-08 18:39:50.016232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:50.016272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:50.016339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116960 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:50.016359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:50.016381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:50.016395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:50.016415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:50.016428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:50.016458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.819 [2024-12-08 18:39:50.016474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.819 [2024-12-08 18:39:50.016509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016711] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.016965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.016989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 
18:39:50.017273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 
18:39:50.017629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.820 [2024-12-08 18:39:50.017972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.820 [2024-12-08 18:39:50.017987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.820 [2024-12-08 18:39:50.017999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 
nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117296 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.821 [2024-12-08 18:39:50.018848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.018980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 
[2024-12-08 18:39:50.019006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.019020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.019032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.821 [2024-12-08 18:39:50.019045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.821 [2024-12-08 18:39:50.019057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.822 [2024-12-08 18:39:50.019502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-12-08 18:39:50.019527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.822 SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019547] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-12-08 18:39:50.019560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-12-08 18:39:50.019585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-12-08 18:39:50.019611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-12-08 18:39:50.019637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-12-08 18:39:50.019662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-12-08 18:39:50.019688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c0860 is same with the state(6) to be set 00:22:52.822 [2024-12-08 18:39:50.019715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.019724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.019734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116944 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.019746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.019768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.019777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117464 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.019811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.019833] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.019843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117472 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.019860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.019881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.019890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117480 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.019908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.019929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.019938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117488 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.019949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.019961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.019969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.019978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117496 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.019989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.020001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.020010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.020018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117504 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.020030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.020041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.020050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.020059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117512 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.020096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.020107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.822 [2024-12-08 18:39:50.020115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:52.822 [2024-12-08 18:39:50.020123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117520 len:8 PRP1 0x0 PRP2 0x0 00:22:52.822 [2024-12-08 18:39:50.020134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-12-08 18:39:50.020145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117528 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117536 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117544 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117552 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117560 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 
[2024-12-08 18:39:50.020374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117568 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117576 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117584 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117592 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117600 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117608 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.823 [2024-12-08 18:39:50.020633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.823 [2024-12-08 18:39:50.020642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117616 len:8 PRP1 0x0 PRP2 0x0 00:22:52.823 [2024-12-08 18:39:50.020653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020704] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c0860 was disconnected and freed. reset controller. 00:22:52.823 [2024-12-08 18:39:50.020790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.823 [2024-12-08 18:39:50.020813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.823 [2024-12-08 18:39:50.020839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.823 [2024-12-08 18:39:50.020862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.823 [2024-12-08 18:39:50.020886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-12-08 18:39:50.020911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-12-08 18:39:50.020928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107d4a0 is same with the state(6) to be set 00:22:52.823 [2024-12-08 18:39:50.021912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.823 [2024-12-08 18:39:50.021947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107d4a0 (9): Bad file descriptor 00:22:52.823 [2024-12-08 18:39:50.022316] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.823 [2024-12-08 18:39:50.022345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107d4a0 with addr=10.0.0.3, port=4421 00:22:52.823 [2024-12-08 18:39:50.022360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107d4a0 is same with the state(6) to be set 00:22:52.823 [2024-12-08 18:39:50.022442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107d4a0 (9): Bad file descriptor 00:22:52.823 [2024-12-08 18:39:50.022476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.823 [2024-12-08 18:39:50.022490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:52.823 [2024-12-08 18:39:50.022513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.823 [2024-12-08 18:39:50.022542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.823 [2024-12-08 18:39:50.022558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.823 7406.11 IOPS, 28.93 MiB/s [2024-12-08T18:40:10.753Z] 7448.76 IOPS, 29.10 MiB/s [2024-12-08T18:40:10.753Z] 7495.26 IOPS, 29.28 MiB/s [2024-12-08T18:40:10.753Z] 7542.67 IOPS, 29.46 MiB/s [2024-12-08T18:40:10.753Z] 7585.10 IOPS, 29.63 MiB/s [2024-12-08T18:40:10.753Z] 7626.05 IOPS, 29.79 MiB/s [2024-12-08T18:40:10.753Z] 7665.81 IOPS, 29.94 MiB/s [2024-12-08T18:40:10.753Z] 7699.42 IOPS, 30.08 MiB/s [2024-12-08T18:40:10.753Z] 7738.25 IOPS, 30.23 MiB/s [2024-12-08T18:40:10.753Z] 7776.78 IOPS, 30.38 MiB/s [2024-12-08T18:40:10.753Z] [2024-12-08 18:40:00.077996] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:52.823 7813.26 IOPS, 30.52 MiB/s [2024-12-08T18:40:10.753Z] 7851.79 IOPS, 30.67 MiB/s [2024-12-08T18:40:10.753Z] 7888.38 IOPS, 30.81 MiB/s [2024-12-08T18:40:10.753Z] 7924.37 IOPS, 30.95 MiB/s [2024-12-08T18:40:10.753Z] 7956.00 IOPS, 31.08 MiB/s [2024-12-08T18:40:10.753Z] 7986.39 IOPS, 31.20 MiB/s [2024-12-08T18:40:10.753Z] 8015.12 IOPS, 31.31 MiB/s [2024-12-08T18:40:10.753Z] 8043.58 IOPS, 31.42 MiB/s [2024-12-08T18:40:10.753Z] 8071.44 IOPS, 31.53 MiB/s [2024-12-08T18:40:10.753Z] 8098.65 IOPS, 31.64 MiB/s [2024-12-08T18:40:10.753Z] Received shutdown signal, test time was about 55.572314 seconds 00:22:52.823 00:22:52.823 Latency(us) 00:22:52.823 [2024-12-08T18:40:10.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.824 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.824 Verification LBA range: start 0x0 length 0x4000 00:22:52.824 Nvme0n1 : 55.57 8112.10 31.69 0.00 0.00 15750.98 1042.62 7015926.69 00:22:52.824 [2024-12-08T18:40:10.754Z] =================================================================================================================== 00:22:52.824 [2024-12-08T18:40:10.754Z] Total : 8112.10 31.69 0.00 0.00 15750.98 1042.62 7015926.69 00:22:52.824 [2024-12-08 18:40:10.265702] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-tcp 00:22:53.083 rmmod nvme_tcp 00:22:53.083 rmmod nvme_fabrics 00:22:53.083 rmmod nvme_keyring 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 95282 ']' 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 95282 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95282 ']' 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95282 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95282 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:53.083 killing process with pid 95282 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95282' 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95282 00:22:53.083 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95282 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set 
nvmf_init_br2 down 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.343 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:53.603 ************************************ 00:22:53.603 END TEST nvmf_host_multipath 00:22:53.603 ************************************ 00:22:53.603 00:22:53.603 real 1m1.347s 00:22:53.603 user 2m50.449s 00:22:53.603 sys 0m17.929s 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.603 ************************************ 00:22:53.603 START TEST nvmf_timeout 00:22:53.603 ************************************ 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:53.603 * Looking for test storage... 
00:22:53.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:22:53.603 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:53.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.864 --rc genhtml_branch_coverage=1 00:22:53.864 --rc genhtml_function_coverage=1 00:22:53.864 --rc genhtml_legend=1 00:22:53.864 --rc geninfo_all_blocks=1 00:22:53.864 --rc geninfo_unexecuted_blocks=1 00:22:53.864 00:22:53.864 ' 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:53.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.864 --rc genhtml_branch_coverage=1 00:22:53.864 --rc genhtml_function_coverage=1 00:22:53.864 --rc genhtml_legend=1 00:22:53.864 --rc geninfo_all_blocks=1 00:22:53.864 --rc geninfo_unexecuted_blocks=1 00:22:53.864 00:22:53.864 ' 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:53.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.864 --rc genhtml_branch_coverage=1 00:22:53.864 --rc genhtml_function_coverage=1 00:22:53.864 --rc genhtml_legend=1 00:22:53.864 --rc geninfo_all_blocks=1 00:22:53.864 --rc geninfo_unexecuted_blocks=1 00:22:53.864 00:22:53.864 ' 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:53.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.864 --rc genhtml_branch_coverage=1 00:22:53.864 --rc genhtml_function_coverage=1 00:22:53.864 --rc genhtml_legend=1 00:22:53.864 --rc geninfo_all_blocks=1 00:22:53.864 --rc geninfo_unexecuted_blocks=1 00:22:53.864 00:22:53.864 ' 00:22:53.864 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.865 
18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.865 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:53.865 18:40:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:53.865 Cannot find device "nvmf_init_br" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:53.865 Cannot find device "nvmf_init_br2" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:53.865 Cannot find device "nvmf_tgt_br" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.865 Cannot find device "nvmf_tgt_br2" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:53.865 Cannot find device "nvmf_init_br" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:53.865 Cannot find device "nvmf_init_br2" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:53.865 Cannot find device "nvmf_tgt_br" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:53.865 Cannot find device "nvmf_tgt_br2" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:53.865 Cannot find device "nvmf_br" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:53.865 Cannot find device "nvmf_init_if" 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:53.865 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:53.866 Cannot find device "nvmf_init_if2" 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:53.866 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:54.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:54.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:22:54.125 00:22:54.125 --- 10.0.0.3 ping statistics --- 00:22:54.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.125 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:54.125 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:54.125 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:22:54.125 00:22:54.125 --- 10.0.0.4 ping statistics --- 00:22:54.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.125 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:54.125 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:54.126 00:22:54.126 --- 10.0.0.1 ping statistics --- 00:22:54.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.126 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:54.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:22:54.126 00:22:54.126 --- 10.0.0.2 ping statistics --- 00:22:54.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.126 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=96506 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 96506 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:54.126 18:40:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96506 ']' 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.126 18:40:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.126 [2024-12-08 18:40:12.033904] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:54.126 [2024-12-08 18:40:12.033985] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.385 [2024-12-08 18:40:12.176015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:54.385 [2024-12-08 18:40:12.244348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.385 [2024-12-08 18:40:12.244688] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.385 [2024-12-08 18:40:12.244855] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.385 [2024-12-08 18:40:12.245094] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.385 [2024-12-08 18:40:12.245140] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
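nvmfappstart then loads the nvme-tcp module and launches nvmf_tgt inside the namespace on two cores (-m 0x3), blocking until the application answers on its RPC socket before the test continues. A rough equivalent of that launch-and-wait, reusing the repo paths from this run; the polling loop below is only a stand-in for the waitforlisten helper the script actually calls:

modprobe nvme-tcp

# Run the target inside the namespace so it can bind 10.0.0.3/10.0.0.4.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Wait for the RPC server on the default socket (/var/tmp/spdk.sock).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done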
00:22:54.385 [2024-12-08 18:40:12.245486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.385 [2024-12-08 18:40:12.245504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.385 [2024-12-08 18:40:12.302631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.645 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:54.905 [2024-12-08 18:40:12.691922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.905 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:55.164 Malloc0 00:22:55.164 18:40:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.422 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.679 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:55.938 [2024-12-08 18:40:13.610894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:55.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96549 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96549 /var/tmp/bdevperf.sock 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96549 ']' 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
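With the target up, host/timeout.sh provisions it entirely over RPC: the TCP transport (with the extra options the suite passes for TCP), a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a listener on 10.0.0.3:4420; bdevperf is then started in RPC-server mode (-z) on its own socket so the test can drive it later. A consolidated sketch of those calls, with $RPC introduced here purely as shorthand:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# bdevperf as an RPC server: -z waits for perform_tests instead of running immediately.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!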
00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.938 18:40:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:55.938 [2024-12-08 18:40:13.664967] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:55.938 [2024-12-08 18:40:13.665052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96549 ] 00:22:55.938 [2024-12-08 18:40:13.797119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.938 [2024-12-08 18:40:13.865072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.197 [2024-12-08 18:40:13.918168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.765 18:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.765 18:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:56.765 18:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:57.024 18:40:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:57.289 NVMe0n1 00:22:57.289 18:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96567 00:22:57.289 18:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.289 18:40:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:57.550 Running I/O for 10 seconds... 
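Next the test points rpc.py at the bdevperf socket, passes -r -1 to bdev_nvme_set_options (an unlimited retry count), and attaches the controller with a 5-second controller-loss timeout and a 2-second reconnect delay; the verify workload is then started asynchronously through bdevperf.py. The same steps as a sketch, reusing the paths shown in the trace:

BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

$BPERF_RPC bdev_nvme_set_options -r -1
$BPERF_RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2    # exposes bdev NVMe0n1

# Kick off the 10-second verify job in the background; the test immediately
# starts breaking the connection underneath it.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &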
00:22:58.483 18:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:58.753 9320.00 IOPS, 36.41 MiB/s [2024-12-08T18:40:16.683Z] [2024-12-08 18:40:16.455984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.753 [2024-12-08 18:40:16.456038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.753 [2024-12-08 18:40:16.456067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.753 [2024-12-08 18:40:16.456090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.753 [2024-12-08 18:40:16.456099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.753 [2024-12-08 18:40:16.456107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.753 [2024-12-08 18:40:16.456116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.753 [2024-12-08 18:40:16.456124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.753 [2024-12-08 18:40:16.456133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799630 is same with the state(6) to be set 00:22:58.753 [2024-12-08 18:40:16.456185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.753 [2024-12-08 18:40:16.456205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.753 [2024-12-08 18:40:16.456222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.753 [2024-12-08 18:40:16.456231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.753 [2024-12-08 18:40:16.456240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.753 [2024-12-08 18:40:16.456248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.753 [2024-12-08 18:40:16.456258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.456265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.456275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.456283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.456292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.456299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.456309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.456322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.456332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.456350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.456359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.456367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 
[2024-12-08 18:40:16.457162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.457842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.457851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.458274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.458312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.458331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.458350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.458383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.458401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.458448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.458586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.458605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.458722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.458747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.458757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.458765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.754 [2024-12-08 18:40:16.459142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.459879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.459890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.460048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.460147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.460162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.754 [2024-12-08 18:40:16.460173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.754 [2024-12-08 18:40:16.460182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.460200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.460462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.460483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.755 [2024-12-08 18:40:16.460501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.460649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.460667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.460685] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.460711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.460720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461898] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.755 [2024-12-08 18:40:16.461939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.461949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.461958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.755 [2024-12-08 18:40:16.462743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.755 [2024-12-08 18:40:16.462754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.462762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.462772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.462780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.462867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.462878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.462888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.462896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.462906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.462913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 
[2024-12-08 18:40:16.462923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.462932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.462942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.756 [2024-12-08 18:40:16.463761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.463784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.463815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.463835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.463942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.463965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.463975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.463984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.464272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.756 [2024-12-08 18:40:16.464295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.464327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.756 [2024-12-08 18:40:16.464337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.756 [2024-12-08 18:40:16.464345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88952 len:8 PRP1 0x0 PRP2 0x0 00:22:58.756 [2024-12-08 18:40:16.464353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.756 [2024-12-08 18:40:16.464849] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ba670 was disconnected and freed. reset controller. 00:22:58.756 [2024-12-08 18:40:16.465252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.756 [2024-12-08 18:40:16.465296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799630 (9): Bad file descriptor 00:22:58.756 [2024-12-08 18:40:16.465391] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.756 [2024-12-08 18:40:16.465527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1799630 with addr=10.0.0.3, port=4420 00:22:58.756 [2024-12-08 18:40:16.465541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799630 is same with the state(6) to be set 00:22:58.756 [2024-12-08 18:40:16.465639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799630 (9): Bad file descriptor 00:22:58.756 [2024-12-08 18:40:16.465661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.756 [2024-12-08 18:40:16.465670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:58.756 [2024-12-08 18:40:16.465680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.756 [2024-12-08 18:40:16.465701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.756 [2024-12-08 18:40:16.465711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.756 18:40:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:00.643 5496.00 IOPS, 21.47 MiB/s [2024-12-08T18:40:18.573Z] 3664.00 IOPS, 14.31 MiB/s [2024-12-08T18:40:18.573Z] [2024-12-08 18:40:18.466016] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.643 [2024-12-08 18:40:18.466076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1799630 with addr=10.0.0.3, port=4420 00:23:00.643 [2024-12-08 18:40:18.466089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799630 is same with the state(6) to be set 00:23:00.643 [2024-12-08 18:40:18.466108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799630 (9): Bad file descriptor 00:23:00.643 [2024-12-08 18:40:18.466123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.643 [2024-12-08 18:40:18.466131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.643 [2024-12-08 18:40:18.466141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.643 [2024-12-08 18:40:18.466160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:00.643 [2024-12-08 18:40:18.466170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.643 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:00.643 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.643 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:00.902 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:00.902 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:00.902 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:00.902 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:01.160 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:01.160 18:40:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:02.791 2748.00 IOPS, 10.73 MiB/s [2024-12-08T18:40:20.721Z] 2198.40 IOPS, 8.59 MiB/s [2024-12-08T18:40:20.721Z] [2024-12-08 18:40:20.466281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.791 [2024-12-08 18:40:20.466347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1799630 with addr=10.0.0.3, port=4420 00:23:02.791 [2024-12-08 18:40:20.466363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799630 is same with the state(6) to be set 00:23:02.791 [2024-12-08 18:40:20.466385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799630 (9): Bad file descriptor 00:23:02.792 [2024-12-08 18:40:20.466402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is 
in error state 00:23:02.792 [2024-12-08 18:40:20.466410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:02.792 [2024-12-08 18:40:20.466433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:02.792 [2024-12-08 18:40:20.466458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:02.792 [2024-12-08 18:40:20.466468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:04.755 1832.00 IOPS, 7.16 MiB/s [2024-12-08T18:40:22.685Z] 1570.29 IOPS, 6.13 MiB/s [2024-12-08T18:40:22.685Z] [2024-12-08 18:40:22.466498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:04.755 [2024-12-08 18:40:22.466535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:04.755 [2024-12-08 18:40:22.466560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:04.755 [2024-12-08 18:40:22.466568] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:04.755 [2024-12-08 18:40:22.466587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.692 1374.00 IOPS, 5.37 MiB/s 00:23:05.692 Latency(us) 00:23:05.692 [2024-12-08T18:40:23.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.692 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:05.692 Verification LBA range: start 0x0 length 0x4000 00:23:05.692 NVMe0n1 : 8.17 1345.61 5.26 15.67 0.00 93896.58 2800.17 7046430.72 00:23:05.692 [2024-12-08T18:40:23.622Z] =================================================================================================================== 00:23:05.692 [2024-12-08T18:40:23.622Z] Total : 1345.61 5.26 15.67 0.00 93896.58 2800.17 7046430.72 00:23:05.692 { 00:23:05.692 "results": [ 00:23:05.692 { 00:23:05.692 "job": "NVMe0n1", 00:23:05.692 "core_mask": "0x4", 00:23:05.692 "workload": "verify", 00:23:05.692 "status": "finished", 00:23:05.692 "verify_range": { 00:23:05.692 "start": 0, 00:23:05.692 "length": 16384 00:23:05.692 }, 00:23:05.692 "queue_depth": 128, 00:23:05.692 "io_size": 4096, 00:23:05.692 "runtime": 8.168777, 00:23:05.692 "iops": 1345.6114666858944, 00:23:05.692 "mibps": 5.256294791741775, 00:23:05.692 "io_failed": 128, 00:23:05.692 "io_timeout": 0, 00:23:05.692 "avg_latency_us": 93896.58412034009, 00:23:05.692 "min_latency_us": 2800.1745454545453, 00:23:05.692 "max_latency_us": 7046430.72 00:23:05.692 } 00:23:05.692 ], 00:23:05.692 "core_count": 1 00:23:05.692 } 00:23:06.274 18:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:06.274 18:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.274 18:40:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:06.532 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:06.533 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:06.533 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:06.533 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96567 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96549 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96549 ']' 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96549 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96549 00:23:06.791 killing process with pid 96549 00:23:06.791 Received shutdown signal, test time was about 9.236545 seconds 00:23:06.791 00:23:06.791 Latency(us) 00:23:06.791 [2024-12-08T18:40:24.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.791 [2024-12-08T18:40:24.721Z] =================================================================================================================== 00:23:06.791 [2024-12-08T18:40:24.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96549' 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96549 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96549 00:23:06.791 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:07.049 [2024-12-08 18:40:24.909202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:07.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96692 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96692 /var/tmp/bdevperf.sock 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96692 ']' 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.049 18:40:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:07.324 [2024-12-08 18:40:24.983023] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:07.325 [2024-12-08 18:40:24.983132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96692 ] 00:23:07.325 [2024-12-08 18:40:25.120821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.325 [2024-12-08 18:40:25.187777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.325 [2024-12-08 18:40:25.241325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:07.583 18:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.583 18:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:07.583 18:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:07.583 18:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:08.150 NVMe0n1 00:23:08.150 18:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.150 18:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96708 00:23:08.150 18:40:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:08.150 Running I/O for 10 seconds... 
00:23:09.087 18:40:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:09.350 8596.00 IOPS, 33.58 MiB/s [2024-12-08T18:40:27.280Z] [2024-12-08 18:40:27.098851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.098997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 
00:23:09.350 [2024-12-08 18:40:27.099053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.350 [2024-12-08 18:40:27.099203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099371] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the 
state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.099566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314d50 is same with the state(6) to be set 00:23:09.351 [2024-12-08 18:40:27.100968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.351 [2024-12-08 18:40:27.101806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.351 [2024-12-08 18:40:27.101816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.101826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.101851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.101873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.101883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.101891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.102986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.102994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 
[2024-12-08 18:40:27.103075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103642] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.352 [2024-12-08 18:40:27.103787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.352 [2024-12-08 18:40:27.103824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.103986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.103996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79912 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:09.353 [2024-12-08 18:40:27.104506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.353 [2024-12-08 18:40:27.104561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.353 [2024-12-08 18:40:27.104579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.353 [2024-12-08 18:40:27.104604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.353 [2024-12-08 18:40:27.104622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.353 [2024-12-08 18:40:27.104632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.353 [2024-12-08 18:40:27.104640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.354 [2024-12-08 18:40:27.104842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.354 [2024-12-08 18:40:27.104859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b99c0 is same with the state(6) to be set 00:23:09.354 [2024-12-08 18:40:27.104880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.104891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.104899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.104907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.104923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.104930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80160 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.104938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.104953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.104960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80168 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.104967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.104976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.104982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.104989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80176 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.104997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80184 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80192 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 
18:40:27.105073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80200 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80208 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80216 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80224 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.354 [2024-12-08 18:40:27.105258] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.354 [2024-12-08 18:40:27.105265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80248 len:8 PRP1 0x0 PRP2 0x0 00:23:09.354 [2024-12-08 18:40:27.105273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.354 [2024-12-08 18:40:27.105281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.355 [2024-12-08 18:40:27.105288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.355 [2024-12-08 18:40:27.105294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80256 len:8 PRP1 0x0 PRP2 0x0 00:23:09.355 [2024-12-08 18:40:27.105302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.355 [2024-12-08 18:40:27.105317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.355 [2024-12-08 18:40:27.105324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0 00:23:09.355 [2024-12-08 18:40:27.105332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.355 [2024-12-08 18:40:27.105347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.355 [2024-12-08 18:40:27.105353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80272 len:8 PRP1 0x0 PRP2 0x0 00:23:09.355 [2024-12-08 18:40:27.105361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.355 [2024-12-08 18:40:27.105379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.355 [2024-12-08 18:40:27.105386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:23:09.355 [2024-12-08 18:40:27.105394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.355 [2024-12-08 18:40:27.105435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.355 [2024-12-08 18:40:27.105442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80288 len:8 PRP1 0x0 PRP2 0x0 00:23:09.355 [2024-12-08 18:40:27.105450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.355 [2024-12-08 18:40:27.105465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:09.355 [2024-12-08 18:40:27.105472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80296 len:8 PRP1 0x0 PRP2 0x0 00:23:09.355 [2024-12-08 18:40:27.105480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.355 [2024-12-08 18:40:27.105496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.355 [2024-12-08 18:40:27.105503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80304 len:8 PRP1 0x0 PRP2 0x0 00:23:09.355 [2024-12-08 18:40:27.105511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105567] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7b99c0 was disconnected and freed. reset controller. 00:23:09.355 [2024-12-08 18:40:27.105669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.355 [2024-12-08 18:40:27.105686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.355 [2024-12-08 18:40:27.105706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.355 [2024-12-08 18:40:27.105723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.355 [2024-12-08 18:40:27.105740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.355 [2024-12-08 18:40:27.105749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:09.355 [2024-12-08 18:40:27.105950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.355 [2024-12-08 18:40:27.105969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad file descriptor 00:23:09.355 [2024-12-08 18:40:27.106055] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.355 [2024-12-08 18:40:27.106075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7988b0 with addr=10.0.0.3, port=4420 00:23:09.355 [2024-12-08 18:40:27.106085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:09.355 [2024-12-08 18:40:27.106101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad 
file descriptor 00:23:09.355 [2024-12-08 18:40:27.106115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.355 [2024-12-08 18:40:27.106130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.355 [2024-12-08 18:40:27.106141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.355 [2024-12-08 18:40:27.106160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.355 [2024-12-08 18:40:27.106170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.355 18:40:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:10.293 4955.50 IOPS, 19.36 MiB/s [2024-12-08T18:40:28.223Z] [2024-12-08 18:40:28.106262] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.293 [2024-12-08 18:40:28.106307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7988b0 with addr=10.0.0.3, port=4420 00:23:10.293 [2024-12-08 18:40:28.106335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:10.293 [2024-12-08 18:40:28.106354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad file descriptor 00:23:10.293 [2024-12-08 18:40:28.106369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.293 [2024-12-08 18:40:28.106378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:10.293 [2024-12-08 18:40:28.106387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.293 [2024-12-08 18:40:28.106407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.293 [2024-12-08 18:40:28.106444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.293 18:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:10.553 [2024-12-08 18:40:28.313570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:10.553 18:40:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96708 00:23:11.377 3303.67 IOPS, 12.90 MiB/s [2024-12-08T18:40:29.307Z] [2024-12-08 18:40:29.120904] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
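For context, the reconnect loop above only succeeds once the TCP listener is restored: the uring connect() attempts to 10.0.0.3 port 4420 keep failing with errno = 111 while the listener is gone, and the controller reset only completes after timeout.sh re-adds it through rpc.py. A minimal Python sketch of that listener toggle, assuming the same rpc.py path, NQN, address and port printed in the log (toggle_listener() and the surrounding helper are illustrative, not part of host/timeout.sh):

# Sketch: toggle the NVMe/TCP listener that the timeout test removes and restores.
# Assumes the rpc.py path, NQN, address and port shown in the log above;
# rpc() and toggle_listener() are illustrative helpers, not part of the test script.
import subprocess
import time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

def rpc(*args):
    # Run an SPDK RPC against the target's default RPC socket.
    subprocess.run([RPC, *args], check=True)

def toggle_listener(down_seconds):
    # Drop the listener so in-flight I/O is aborted (the SQ DELETION notices above)
    # and the initiator's reconnects fail with errno 111, then restore it so the
    # controller reset can succeed.
    rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)
    time.sleep(down_seconds)
    rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)

if __name__ == "__main__":
    toggle_listener(down_seconds=1)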
00:23:13.246 2477.75 IOPS, 9.68 MiB/s [2024-12-08T18:40:32.112Z] 3629.20 IOPS, 14.18 MiB/s [2024-12-08T18:40:33.051Z] 4703.67 IOPS, 18.37 MiB/s [2024-12-08T18:40:33.990Z] 5470.57 IOPS, 21.37 MiB/s [2024-12-08T18:40:35.372Z] 6057.75 IOPS, 23.66 MiB/s [2024-12-08T18:40:36.308Z] 6510.44 IOPS, 25.43 MiB/s [2024-12-08T18:40:36.308Z] 6880.60 IOPS, 26.88 MiB/s 00:23:18.378 Latency(us) 00:23:18.378 [2024-12-08T18:40:36.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.378 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.378 Verification LBA range: start 0x0 length 0x4000 00:23:18.378 NVMe0n1 : 10.01 6886.81 26.90 0.00 0.00 18562.77 1199.01 3035150.89 00:23:18.378 [2024-12-08T18:40:36.308Z] =================================================================================================================== 00:23:18.378 [2024-12-08T18:40:36.308Z] Total : 6886.81 26.90 0.00 0.00 18562.77 1199.01 3035150.89 00:23:18.378 { 00:23:18.378 "results": [ 00:23:18.378 { 00:23:18.378 "job": "NVMe0n1", 00:23:18.378 "core_mask": "0x4", 00:23:18.378 "workload": "verify", 00:23:18.378 "status": "finished", 00:23:18.378 "verify_range": { 00:23:18.378 "start": 0, 00:23:18.378 "length": 16384 00:23:18.378 }, 00:23:18.378 "queue_depth": 128, 00:23:18.378 "io_size": 4096, 00:23:18.378 "runtime": 10.009573, 00:23:18.378 "iops": 6886.807259410566, 00:23:18.378 "mibps": 26.901590857072524, 00:23:18.378 "io_failed": 0, 00:23:18.378 "io_timeout": 0, 00:23:18.378 "avg_latency_us": 18562.766218227185, 00:23:18.378 "min_latency_us": 1199.010909090909, 00:23:18.378 "max_latency_us": 3035150.8945454545 00:23:18.378 } 00:23:18.378 ], 00:23:18.378 "core_count": 1 00:23:18.378 } 00:23:18.378 18:40:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96811 00:23:18.378 18:40:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.378 18:40:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:23:18.378 Running I/O for 10 seconds... 
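The per-run summary above is also emitted as a JSON "results" block, which is what makes the IOPS and latency numbers easy to pull out programmatically. A small Python sketch under the assumption that the JSON object has been captured to a file (the bdevperf_results.json path and parse_results() helper are illustrative names, not produced by the test):

# Sketch: extract the headline numbers from a bdevperf JSON results block like
# the one printed above. The input path and helper name are illustrative.
import json

def parse_results(path="bdevperf_results.json"):
    with open(path) as f:
        report = json.load(f)
    for job in report["results"]:
        # Field names as shown in the log: iops, mibps, and latency in microseconds.
        print(f'{job["job"]}: {job["iops"]:.2f} IOPS, '
              f'{job["mibps"]:.2f} MiB/s, '
              f'avg latency {job["avg_latency_us"]:.2f} us '
              f'(min {job["min_latency_us"]:.2f}, max {job["max_latency_us"]:.2f})')

if __name__ == "__main__":
    parse_results()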
00:23:19.314 18:40:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:19.314 8191.00 IOPS, 32.00 MiB/s [2024-12-08T18:40:37.244Z] [2024-12-08 18:40:37.238073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.314 [2024-12-08 18:40:37.238136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.314 [2024-12-08 18:40:37.238166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.314 [2024-12-08 18:40:37.238176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.314 [2024-12-08 18:40:37.238186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.314 [2024-12-08 18:40:37.238194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.314 [2024-12-08 18:40:37.238204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.314 [2024-12-08 18:40:37.238212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.314 [2024-12-08 18:40:37.238221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:19.314 [2024-12-08 18:40:37.238792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.314 [2024-12-08 18:40:37.238824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.314 [2024-12-08 18:40:37.238846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.314 [2024-12-08 18:40:37.238856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.314 [2024-12-08 18:40:37.238869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.314 [2024-12-08 18:40:37.238878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.238889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.238907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.238934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.238943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.238953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.238963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.238974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.238982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.238992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.239011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.239496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.239520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.239540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.239575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.239594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.239614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 
18:40:37.239632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.239786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.240982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.240992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.315 [2024-12-08 18:40:37.241604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.315 [2024-12-08 18:40:37.241614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.241625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.241634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.241734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.241750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.241762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.241772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.241783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.241995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.242022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.242041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.242061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.242192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.242214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.242479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.316 [2024-12-08 18:40:37.242502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.316 [2024-12-08 18:40:37.242513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.242642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.242902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.242916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.242927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.242937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.242947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.242956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.242966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 
18:40:37.242975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.242994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.243118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.243134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.243241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.243251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.243262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.243271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.243516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.243530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.243541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.243551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.243678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.243699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.243978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.243993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.244983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.244994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.577 [2024-12-08 18:40:37.245701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:19.577 [2024-12-08 18:40:37.245712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.245831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.245851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.245861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.246891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.246901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247367] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.578 [2024-12-08 18:40:37.247807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.247830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.247851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.247862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.247871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74664 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.248814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.248952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.249091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.249211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.249230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.249243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.578 [2024-12-08 18:40:37.249375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.578 [2024-12-08 18:40:37.249389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:19.579 [2024-12-08 18:40:37.249660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.579 [2024-12-08 18:40:37.249675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b83d0 is same with the state(6) to be set 00:23:19.579 [2024-12-08 18:40:37.249945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.579 [2024-12-08 18:40:37.249963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.579 [2024-12-08 18:40:37.250097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75632 len:8 PRP1 0x0 PRP2 0x0 00:23:19.579 [2024-12-08 18:40:37.250216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.579 [2024-12-08 18:40:37.250386] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7b83d0 was disconnected and freed. reset controller. 00:23:19.579 [2024-12-08 18:40:37.250529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad file descriptor 00:23:19.579 [2024-12-08 18:40:37.251012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.579 [2024-12-08 18:40:37.251307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.579 [2024-12-08 18:40:37.251340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7988b0 with addr=10.0.0.3, port=4420 00:23:19.579 [2024-12-08 18:40:37.251353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:19.579 [2024-12-08 18:40:37.251373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad file descriptor 00:23:19.579 [2024-12-08 18:40:37.251389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.579 [2024-12-08 18:40:37.251398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:19.579 [2024-12-08 18:40:37.251664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.579 [2024-12-08 18:40:37.251690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.579 [2024-12-08 18:40:37.251965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.579 18:40:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:20.535 4663.50 IOPS, 18.22 MiB/s [2024-12-08T18:40:38.465Z] [2024-12-08 18:40:38.252062] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.535 [2024-12-08 18:40:38.252138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7988b0 with addr=10.0.0.3, port=4420 00:23:20.535 [2024-12-08 18:40:38.252182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:20.535 [2024-12-08 18:40:38.252201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad file descriptor 00:23:20.535 [2024-12-08 18:40:38.252216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.535 [2024-12-08 18:40:38.252224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:20.535 [2024-12-08 18:40:38.252233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:20.535 [2024-12-08 18:40:38.252252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.535 [2024-12-08 18:40:38.252261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.527 3109.00 IOPS, 12.14 MiB/s [2024-12-08T18:40:39.457Z] [2024-12-08 18:40:39.252334] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.527 [2024-12-08 18:40:39.252408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7988b0 with addr=10.0.0.3, port=4420 00:23:21.527 [2024-12-08 18:40:39.252448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:21.527 [2024-12-08 18:40:39.252467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad file descriptor 00:23:21.527 [2024-12-08 18:40:39.252483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.527 [2024-12-08 18:40:39.252491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.527 [2024-12-08 18:40:39.252500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.527 [2024-12-08 18:40:39.252518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.527 [2024-12-08 18:40:39.252528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.464 2331.75 IOPS, 9.11 MiB/s [2024-12-08T18:40:40.394Z] [2024-12-08 18:40:40.254600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.464 [2024-12-08 18:40:40.254675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7988b0 with addr=10.0.0.3, port=4420 00:23:22.464 [2024-12-08 18:40:40.254689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7988b0 is same with the state(6) to be set 00:23:22.464 [2024-12-08 18:40:40.255038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7988b0 (9): Bad file descriptor 00:23:22.464 [2024-12-08 18:40:40.255504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.464 [2024-12-08 18:40:40.255532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.464 [2024-12-08 18:40:40.255543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.464 18:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:22.464 [2024-12-08 18:40:40.259597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.464 [2024-12-08 18:40:40.259631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.723 [2024-12-08 18:40:40.506982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:22.723 18:40:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96811 00:23:23.549 1865.40 IOPS, 7.29 MiB/s [2024-12-08T18:40:41.479Z] [2024-12-08 18:40:41.293312] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:25.425 2932.83 IOPS, 11.46 MiB/s [2024-12-08T18:40:44.290Z] 3979.00 IOPS, 15.54 MiB/s [2024-12-08T18:40:45.241Z] 4768.12 IOPS, 18.63 MiB/s [2024-12-08T18:40:46.184Z] 5378.11 IOPS, 21.01 MiB/s [2024-12-08T18:40:46.184Z] 5866.90 IOPS, 22.92 MiB/s 00:23:28.254 Latency(us) 00:23:28.254 [2024-12-08T18:40:46.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.254 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.254 Verification LBA range: start 0x0 length 0x4000 00:23:28.254 NVMe0n1 : 10.01 5871.95 22.94 4376.66 0.00 12455.71 521.31 3019898.88 00:23:28.254 [2024-12-08T18:40:46.184Z] =================================================================================================================== 00:23:28.254 [2024-12-08T18:40:46.184Z] Total : 5871.95 22.94 4376.66 0.00 12455.71 0.00 3019898.88 00:23:28.254 { 00:23:28.254 "results": [ 00:23:28.254 { 00:23:28.254 "job": "NVMe0n1", 00:23:28.254 "core_mask": "0x4", 00:23:28.254 "workload": "verify", 00:23:28.254 "status": "finished", 00:23:28.254 "verify_range": { 00:23:28.254 "start": 0, 00:23:28.254 "length": 16384 00:23:28.254 }, 00:23:28.254 "queue_depth": 128, 00:23:28.254 "io_size": 4096, 00:23:28.254 "runtime": 10.008772, 00:23:28.254 "iops": 5871.949126226474, 00:23:28.254 "mibps": 22.937301274322163, 00:23:28.254 "io_failed": 43805, 00:23:28.254 "io_timeout": 0, 00:23:28.254 "avg_latency_us": 12455.714994398832, 00:23:28.254 "min_latency_us": 521.3090909090909, 00:23:28.254 "max_latency_us": 3019898.88 00:23:28.254 } 00:23:28.254 ], 00:23:28.254 "core_count": 1 00:23:28.254 } 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96692 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96692 ']' 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96692 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96692 00:23:28.254 killing process with pid 96692 00:23:28.254 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.254 00:23:28.254 Latency(us) 00:23:28.254 [2024-12-08T18:40:46.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.254 [2024-12-08T18:40:46.184Z] =================================================================================================================== 00:23:28.254 [2024-12-08T18:40:46.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96692' 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96692 00:23:28.254 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96692 00:23:28.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:28.512 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96927 00:23:28.512 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:28.512 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96927 /var/tmp/bdevperf.sock 00:23:28.512 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96927 ']' 00:23:28.512 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.512 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.513 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.513 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.513 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:28.513 [2024-12-08 18:40:46.417763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:28.513 [2024-12-08 18:40:46.417870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96927 ] 00:23:28.770 [2024-12-08 18:40:46.556333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.770 [2024-12-08 18:40:46.613722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.770 [2024-12-08 18:40:46.666248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:29.028 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.028 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:29.028 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96934 00:23:29.028 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96927 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:29.028 18:40:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:29.287 18:40:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:29.545 NVMe0n1 00:23:29.545 18:40:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.545 18:40:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96972 00:23:29.545 18:40:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:29.545 Running I/O for 10 seconds... 
00:23:30.480 18:40:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:30.744 18669.00 IOPS, 72.93 MiB/s [2024-12-08T18:40:48.674Z] [2024-12-08 18:40:48.520580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.744 [2024-12-08 18:40:48.520757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 
00:23:30.745 [2024-12-08 18:40:48.520784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.520998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521094] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the 
state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.745 [2024-12-08 18:40:48.521355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1312b60 is same with the state(6) to be set 00:23:30.746 [2024-12-08 18:40:48.521623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:30.746 [2024-12-08 18:40:48.521675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.521867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.521874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522782] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.522984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.522993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.523001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.523011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.523018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.523028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.523036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.523047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.523055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.523065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.523073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.523083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.746 [2024-12-08 18:40:48.523091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.746 [2024-12-08 18:40:48.523101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:30.747 [2024-12-08 18:40:48.523580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523772] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.523977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.747 [2024-12-08 18:40:48.523997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.747 [2024-12-08 18:40:48.524006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.524990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.524998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.525008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.525016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.525025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.525033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.525043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.525051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:30.748 [2024-12-08 18:40:48.525060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.525068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.525078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.525086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.525095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.525103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.748 [2024-12-08 18:40:48.525113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.748 [2024-12-08 18:40:48.525121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 
18:40:48.525249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.749 [2024-12-08 18:40:48.525528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1100810 is same with the state(6) to be set 00:23:30.749 [2024-12-08 18:40:48.525548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.749 [2024-12-08 18:40:48.525554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.749 [2024-12-08 18:40:48.525561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53496 len:8 PRP1 0x0 PRP2 0x0 00:23:30.749 [2024-12-08 18:40:48.525570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525620] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1100810 was disconnected and freed. reset controller. 
00:23:30.749 [2024-12-08 18:40:48.525698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.749 [2024-12-08 18:40:48.525714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.749 [2024-12-08 18:40:48.525732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.749 [2024-12-08 18:40:48.525749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.749 [2024-12-08 18:40:48.525766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.749 [2024-12-08 18:40:48.525773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10df650 is same with the state(6) to be set 00:23:30.750 [2024-12-08 18:40:48.526000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:30.750 [2024-12-08 18:40:48.526040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10df650 (9): Bad file descriptor 00:23:30.750 [2024-12-08 18:40:48.526135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.750 [2024-12-08 18:40:48.526156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df650 with addr=10.0.0.3, port=4420 00:23:30.750 [2024-12-08 18:40:48.526167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10df650 is same with the state(6) to be set 00:23:30.750 [2024-12-08 18:40:48.526183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10df650 (9): Bad file descriptor 00:23:30.750 [2024-12-08 18:40:48.526197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:30.750 [2024-12-08 18:40:48.526206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:30.750 [2024-12-08 18:40:48.526216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.750 [2024-12-08 18:40:48.526234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:30.750 18:40:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96972 00:23:30.750 [2024-12-08 18:40:48.541198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:32.619 10414.00 IOPS, 40.68 MiB/s [2024-12-08T18:40:50.549Z] 6942.67 IOPS, 27.12 MiB/s [2024-12-08T18:40:50.549Z] [2024-12-08 18:40:50.541356] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.619 [2024-12-08 18:40:50.541738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df650 with addr=10.0.0.3, port=4420 00:23:32.619 [2024-12-08 18:40:50.542170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10df650 is same with the state(6) to be set 00:23:32.619 [2024-12-08 18:40:50.542570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10df650 (9): Bad file descriptor 00:23:32.619 [2024-12-08 18:40:50.542988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:32.619 [2024-12-08 18:40:50.543380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:32.619 [2024-12-08 18:40:50.543783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:32.619 [2024-12-08 18:40:50.544048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.619 [2024-12-08 18:40:50.544277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.495 5207.00 IOPS, 20.34 MiB/s [2024-12-08T18:40:52.685Z] 4165.60 IOPS, 16.27 MiB/s [2024-12-08T18:40:52.685Z] [2024-12-08 18:40:52.544787] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.755 [2024-12-08 18:40:52.545170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10df650 with addr=10.0.0.3, port=4420 00:23:34.755 [2024-12-08 18:40:52.545193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10df650 is same with the state(6) to be set 00:23:34.755 [2024-12-08 18:40:52.545229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10df650 (9): Bad file descriptor 00:23:34.755 [2024-12-08 18:40:52.545246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.755 [2024-12-08 18:40:52.545255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.755 [2024-12-08 18:40:52.545264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.755 [2024-12-08 18:40:52.545298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.755 [2024-12-08 18:40:52.545308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.631 3471.33 IOPS, 13.56 MiB/s [2024-12-08T18:40:54.561Z] 2975.43 IOPS, 11.62 MiB/s [2024-12-08T18:40:54.561Z] [2024-12-08 18:40:54.545375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:36.631 [2024-12-08 18:40:54.545456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:36.631 [2024-12-08 18:40:54.545468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:36.631 [2024-12-08 18:40:54.545477] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:23:36.631 [2024-12-08 18:40:54.545496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:37.828 2603.50 IOPS, 10.17 MiB/s
00:23:37.828 Latency(us)
00:23:37.828 [2024-12-08T18:40:55.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:37.828 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:23:37.828 NVMe0n1 : 8.13 2561.52 10.01 15.74 0.00 49587.25 6583.39 7046430.72
00:23:37.828 [2024-12-08T18:40:55.758Z] ===================================================================================================================
00:23:37.828 [2024-12-08T18:40:55.758Z] Total : 2561.52 10.01 15.74 0.00 49587.25 6583.39 7046430.72
00:23:37.828 {
00:23:37.828 "results": [
00:23:37.828 {
00:23:37.828 "job": "NVMe0n1",
00:23:37.828 "core_mask": "0x4",
00:23:37.828 "workload": "randread",
00:23:37.828 "status": "finished",
00:23:37.828 "queue_depth": 128,
00:23:37.828 "io_size": 4096,
00:23:37.828 "runtime": 8.1311,
00:23:37.828 "iops": 2561.5230411629423,
00:23:37.828 "mibps": 10.005949379542743,
00:23:37.828 "io_failed": 128,
00:23:37.828 "io_timeout": 0,
00:23:37.828 "avg_latency_us": 49587.253928404105,
00:23:37.828 "min_latency_us": 6583.389090909091,
00:23:37.828 "max_latency_us": 7046430.72
00:23:37.828 }
00:23:37.828 ],
00:23:37.828 "core_count": 1
00:23:37.828 }
00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:37.828 Attaching 5 probes...
00:23:37.828 1284.244560: reset bdev controller NVMe0 00:23:37.828 1284.328508: reconnect bdev controller NVMe0 00:23:37.828 3299.509949: reconnect delay bdev controller NVMe0 00:23:37.828 3299.527439: reconnect bdev controller NVMe0 00:23:37.828 5302.957697: reconnect delay bdev controller NVMe0 00:23:37.828 5302.973966: reconnect bdev controller NVMe0 00:23:37.828 7303.595383: reconnect delay bdev controller NVMe0 00:23:37.828 7303.612160: reconnect bdev controller NVMe0 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96934 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96927 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96927 ']' 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96927 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96927 00:23:37.828 killing process with pid 96927 00:23:37.828 Received shutdown signal, test time was about 8.198697 seconds 00:23:37.828 00:23:37.828 Latency(us) 00:23:37.828 [2024-12-08T18:40:55.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.828 [2024-12-08T18:40:55.758Z] =================================================================================================================== 00:23:37.828 [2024-12-08T18:40:55.758Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96927' 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96927 00:23:37.828 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96927 00:23:38.087 18:40:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.346 18:40:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.346 rmmod nvme_tcp 00:23:38.346 rmmod nvme_fabrics 00:23:38.346 rmmod nvme_keyring 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 96506 ']' 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 96506 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96506 ']' 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96506 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.346 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96506 00:23:38.346 killing process with pid 96506 00:23:38.347 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:38.347 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:38.347 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96506' 00:23:38.347 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96506 00:23:38.347 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96506 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:38.606 18:40:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:38.606 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:38.865 ************************************ 00:23:38.865 END TEST nvmf_timeout 00:23:38.865 ************************************ 00:23:38.865 00:23:38.865 real 0m45.323s 00:23:38.865 user 2m11.707s 00:23:38.865 sys 0m5.990s 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:38.865 00:23:38.865 real 5m47.711s 00:23:38.865 user 16m11.545s 00:23:38.865 sys 1m19.844s 00:23:38.865 ************************************ 00:23:38.865 END TEST nvmf_host 00:23:38.865 ************************************ 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:38.865 18:40:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.865 18:40:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:38.865 18:40:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:38.865 ************************************ 00:23:38.865 END TEST nvmf_tcp 00:23:38.865 ************************************ 00:23:38.865 00:23:38.865 real 15m12.573s 00:23:38.865 user 39m53.452s 00:23:38.865 sys 4m1.670s 00:23:38.865 18:40:56 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:38.865 18:40:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.125 18:40:56 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:23:39.125 18:40:56 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:39.125 18:40:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:39.125 18:40:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.125 18:40:56 -- common/autotest_common.sh@10 -- # set +x 00:23:39.125 ************************************ 00:23:39.125 START TEST nvmf_dif 00:23:39.125 ************************************ 00:23:39.125 18:40:56 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:39.125 * Looking for test storage... 
00:23:39.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:39.125 18:40:56 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:39.125 18:40:56 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:23:39.125 18:40:56 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:39.125 18:40:56 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:39.125 18:40:56 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.125 18:40:57 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.125 18:40:57 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.125 18:40:57 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:39.125 18:40:57 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.125 18:40:57 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.125 --rc genhtml_branch_coverage=1 00:23:39.125 --rc genhtml_function_coverage=1 00:23:39.125 --rc genhtml_legend=1 00:23:39.125 --rc geninfo_all_blocks=1 00:23:39.125 --rc geninfo_unexecuted_blocks=1 00:23:39.125 00:23:39.125 ' 00:23:39.125 18:40:57 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.125 --rc genhtml_branch_coverage=1 00:23:39.125 --rc genhtml_function_coverage=1 00:23:39.125 --rc genhtml_legend=1 00:23:39.125 --rc geninfo_all_blocks=1 00:23:39.125 --rc geninfo_unexecuted_blocks=1 00:23:39.125 00:23:39.125 ' 00:23:39.125 18:40:57 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:23:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.125 --rc genhtml_branch_coverage=1 00:23:39.125 --rc genhtml_function_coverage=1 00:23:39.125 --rc genhtml_legend=1 00:23:39.125 --rc geninfo_all_blocks=1 00:23:39.125 --rc geninfo_unexecuted_blocks=1 00:23:39.125 00:23:39.125 ' 00:23:39.125 18:40:57 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.125 --rc genhtml_branch_coverage=1 00:23:39.125 --rc genhtml_function_coverage=1 00:23:39.125 --rc genhtml_legend=1 00:23:39.125 --rc geninfo_all_blocks=1 00:23:39.125 --rc geninfo_unexecuted_blocks=1 00:23:39.126 00:23:39.126 ' 00:23:39.126 18:40:57 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.126 18:40:57 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.126 18:40:57 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.126 18:40:57 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.126 18:40:57 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.126 18:40:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.126 18:40:57 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.126 18:40:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.126 18:40:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:39.126 18:40:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.126 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.126 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:39.126 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:39.126 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:39.126 18:40:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:39.126 18:40:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.126 18:40:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:39.126 18:40:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:39.126 18:40:57 
nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.126 18:40:57 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:39.386 Cannot find device "nvmf_init_br" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:39.386 Cannot find device "nvmf_init_br2" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:39.386 Cannot find device "nvmf_tgt_br" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.386 Cannot find device "nvmf_tgt_br2" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:39.386 Cannot find device "nvmf_init_br" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:39.386 Cannot find device "nvmf_init_br2" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:39.386 Cannot find device "nvmf_tgt_br" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:39.386 Cannot find device "nvmf_tgt_br2" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:39.386 Cannot find device "nvmf_br" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:23:39.386 Cannot find device "nvmf_init_if" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:39.386 Cannot find device "nvmf_init_if2" 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:39.386 18:40:57 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:39.645 18:40:57 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:39.645 18:40:57 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:39.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:39.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:23:39.646 00:23:39.646 --- 10.0.0.3 ping statistics --- 00:23:39.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.646 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:39.646 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:39.646 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:23:39.646 00:23:39.646 --- 10.0.0.4 ping statistics --- 00:23:39.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.646 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:39.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:39.646 00:23:39.646 --- 10.0.0.1 ping statistics --- 00:23:39.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.646 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:39.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:39.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:39.646 00:23:39.646 --- 10.0.0.2 ping statistics --- 00:23:39.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.646 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:23:39.646 18:40:57 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:39.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:39.905 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.905 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.905 18:40:57 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.905 18:40:57 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:39.905 18:40:57 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:39.905 18:40:57 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.905 18:40:57 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:39.905 18:40:57 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:39.905 18:40:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:39.905 18:40:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:39.905 18:40:57 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:39.905 18:40:57 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.905 18:40:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.165 18:40:57 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=97471 00:23:40.165 18:40:57 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.165 18:40:57 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 97471 00:23:40.165 18:40:57 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97471 ']' 00:23:40.165 18:40:57 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.165 18:40:57 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.165 18:40:57 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.165 18:40:57 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.165 18:40:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.165 [2024-12-08 18:40:57.894026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:40.165 [2024-12-08 18:40:57.894296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.165 [2024-12-08 18:40:58.036314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.424 [2024-12-08 18:40:58.123575] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
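Note on the test topology: the common.sh block traced above wires up two veth pairs per side, an initiator side left in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) and a target side moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), with all peer ends enslaved to the nvmf_br bridge; the four pings confirm both paths before nvmf_tgt is started inside the namespace. A condensed sketch of one initiator/target pair, reusing the names and addresses from the trace (illustrative only; the real common.sh also installs the iptables ACCEPT rules shown above):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3        # root namespace -> target namespace, as in the trace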
00:23:40.424 [2024-12-08 18:40:58.123661] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.424 [2024-12-08 18:40:58.123677] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.424 [2024-12-08 18:40:58.123688] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.424 [2024-12-08 18:40:58.123698] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.424 [2024-12-08 18:40:58.123738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.424 [2024-12-08 18:40:58.205493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:23:40.424 18:40:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.424 18:40:58 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.424 18:40:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:40.424 18:40:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.424 [2024-12-08 18:40:58.334304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.424 18:40:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:40.424 18:40:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.424 ************************************ 00:23:40.424 START TEST fio_dif_1_default 00:23:40.425 ************************************ 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:40.425 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.684 bdev_null0 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:40.684 
18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.684 [2024-12-08 18:40:58.382426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:40.684 { 00:23:40.684 "params": { 00:23:40.684 "name": "Nvme$subsystem", 00:23:40.684 "trtype": "$TEST_TRANSPORT", 00:23:40.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.684 "adrfam": "ipv4", 00:23:40.684 "trsvcid": "$NVMF_PORT", 00:23:40.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.684 "hdgst": ${hdgst:-false}, 00:23:40.684 "ddgst": ${ddgst:-false} 00:23:40.684 }, 00:23:40.684 "method": "bdev_nvme_attach_controller" 00:23:40.684 } 00:23:40.684 EOF 00:23:40.684 )") 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:40.684 "params": { 00:23:40.684 "name": "Nvme0", 00:23:40.684 "trtype": "tcp", 00:23:40.684 "traddr": "10.0.0.3", 00:23:40.684 "adrfam": "ipv4", 00:23:40.684 "trsvcid": "4420", 00:23:40.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:40.684 "hdgst": false, 00:23:40.684 "ddgst": false 00:23:40.684 }, 00:23:40.684 "method": "bdev_nvme_attach_controller" 00:23:40.684 }' 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:40.684 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:40.685 18:40:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.685 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:40.685 fio-3.35 00:23:40.685 Starting 1 thread 00:23:52.894 00:23:52.894 filename0: (groupid=0, jobs=1): err= 0: pid=97529: Sun Dec 8 18:41:09 2024 00:23:52.894 read: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(405MiB/10001msec) 00:23:52.894 slat (nsec): min=5973, max=70064, avg=7884.12, stdev=3403.59 00:23:52.894 clat (usec): min=329, max=3558, avg=362.08, stdev=35.44 00:23:52.894 lat (usec): min=335, max=3597, avg=369.96, stdev=36.32 00:23:52.894 clat percentiles (usec): 00:23:52.894 | 1.00th=[ 334], 5.00th=[ 
334], 10.00th=[ 338], 20.00th=[ 347], 00:23:52.894 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:23:52.894 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 404], 00:23:52.894 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 515], 99.95th=[ 553], 00:23:52.894 | 99.99th=[ 1926] 00:23:52.894 bw ( KiB/s): min=38438, max=42176, per=100.00%, avg=41457.16, stdev=770.71, samples=19 00:23:52.894 iops : min= 9609, max=10544, avg=10364.26, stdev=192.79, samples=19 00:23:52.894 lat (usec) : 500=99.86%, 750=0.11%, 1000=0.01% 00:23:52.894 lat (msec) : 2=0.01%, 4=0.01% 00:23:52.894 cpu : usr=79.02%, sys=18.43%, ctx=23, majf=0, minf=4 00:23:52.894 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.894 issued rwts: total=103576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.894 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:52.894 00:23:52.894 Run status group 0 (all jobs): 00:23:52.894 READ: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=405MiB (424MB), run=10001-10001msec 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:52.894 ************************************ 00:23:52.894 END TEST fio_dif_1_default 00:23:52.894 ************************************ 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.894 00:23:52.894 real 0m10.970s 00:23:52.894 user 0m8.500s 00:23:52.894 sys 0m2.132s 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:52.894 18:41:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:52.894 18:41:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:52.894 18:41:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:52.894 18:41:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:52.894 ************************************ 00:23:52.894 START TEST fio_dif_1_multi_subsystems 00:23:52.894 ************************************ 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:52.894 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 bdev_null0 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 [2024-12-08 18:41:09.400556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 bdev_null1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:52.895 { 00:23:52.895 "params": { 00:23:52.895 "name": "Nvme$subsystem", 00:23:52.895 "trtype": "$TEST_TRANSPORT", 00:23:52.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.895 "adrfam": "ipv4", 00:23:52.895 "trsvcid": "$NVMF_PORT", 00:23:52.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.895 "hdgst": ${hdgst:-false}, 00:23:52.895 "ddgst": ${ddgst:-false} 00:23:52.895 }, 00:23:52.895 "method": "bdev_nvme_attach_controller" 00:23:52.895 } 00:23:52.895 EOF 00:23:52.895 )") 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
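Note: each create_subsystem step traced here issues plain JSON-RPC calls to the nvmf_tgt that was started on /var/tmp/spdk.sock above (the tcp transport itself was created once earlier with nvmf_create_transport -t tcp -o --dif-insert-or-strip). Outside the harness, the same null-bdev-backed subsystem can be assembled with scripts/rpc.py; a condensed sketch for subsystem 0, with arguments copied from the trace (rpc.py defaults to the /var/tmp/spdk.sock socket, and the repo root is assumed as the working directory):
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420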
00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:52.895 { 00:23:52.895 "params": { 00:23:52.895 "name": "Nvme$subsystem", 00:23:52.895 "trtype": "$TEST_TRANSPORT", 00:23:52.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.895 "adrfam": "ipv4", 00:23:52.895 "trsvcid": "$NVMF_PORT", 00:23:52.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.895 "hdgst": ${hdgst:-false}, 00:23:52.895 "ddgst": ${ddgst:-false} 00:23:52.895 }, 00:23:52.895 "method": "bdev_nvme_attach_controller" 00:23:52.895 } 00:23:52.895 EOF 00:23:52.895 )") 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
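Note: the heredoc traced above is gen_nvmf_target_json emitting one bdev_nvme_attach_controller stanza per subsystem, while gen_fio_conf builds the fio job file; fio_bdev then runs fio with the spdk_bdev ioengine preloaded and reads the bdev JSON from /dev/fd/62 and the job file from /dev/fd/61. A minimal sketch of that plumbing (not the exact dif.sh wiring; $bdev_json and $fio_job are placeholders for the two generated streams):
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    61< <(printf '%s\n' "$fio_job") \
    62< <(printf '%s\n' "$bdev_json")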
00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:52.895 "params": { 00:23:52.895 "name": "Nvme0", 00:23:52.895 "trtype": "tcp", 00:23:52.895 "traddr": "10.0.0.3", 00:23:52.895 "adrfam": "ipv4", 00:23:52.895 "trsvcid": "4420", 00:23:52.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:52.895 "hdgst": false, 00:23:52.895 "ddgst": false 00:23:52.895 }, 00:23:52.895 "method": "bdev_nvme_attach_controller" 00:23:52.895 },{ 00:23:52.895 "params": { 00:23:52.895 "name": "Nvme1", 00:23:52.895 "trtype": "tcp", 00:23:52.895 "traddr": "10.0.0.3", 00:23:52.895 "adrfam": "ipv4", 00:23:52.895 "trsvcid": "4420", 00:23:52.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.895 "hdgst": false, 00:23:52.895 "ddgst": false 00:23:52.895 }, 00:23:52.895 "method": "bdev_nvme_attach_controller" 00:23:52.895 }' 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:52.895 18:41:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.895 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:52.895 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:52.895 fio-3.35 00:23:52.895 Starting 2 threads 00:24:02.876 00:24:02.876 filename0: (groupid=0, jobs=1): err= 0: pid=97690: Sun Dec 8 18:41:20 2024 00:24:02.876 read: IOPS=4922, BW=19.2MiB/s (20.2MB/s)(192MiB/10001msec) 00:24:02.876 slat (nsec): min=5829, max=94250, avg=20402.93, stdev=9476.83 00:24:02.876 clat (usec): min=502, max=2066, avg=757.61, stdev=76.41 00:24:02.876 lat (usec): min=508, max=2087, avg=778.01, stdev=79.23 00:24:02.876 clat percentiles (usec): 00:24:02.876 | 1.00th=[ 627], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 701], 00:24:02.876 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:24:02.877 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 873], 00:24:02.877 | 99.00th=[ 1004], 99.50th=[ 1074], 99.90th=[ 1156], 99.95th=[ 1811], 00:24:02.877 | 99.99th=[ 1991] 00:24:02.877 bw ( KiB/s): min=18336, max=20032, per=50.04%, avg=19703.58, stdev=377.10, samples=19 00:24:02.877 iops : min= 4584, max= 
5008, avg=4925.89, stdev=94.27, samples=19 00:24:02.877 lat (usec) : 750=48.59%, 1000=50.41% 00:24:02.877 lat (msec) : 2=0.99%, 4=0.01% 00:24:02.877 cpu : usr=93.15%, sys=5.65%, ctx=119, majf=0, minf=0 00:24:02.877 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:02.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.877 issued rwts: total=49228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.877 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:02.877 filename1: (groupid=0, jobs=1): err= 0: pid=97691: Sun Dec 8 18:41:20 2024 00:24:02.877 read: IOPS=4922, BW=19.2MiB/s (20.2MB/s)(192MiB/10001msec) 00:24:02.877 slat (nsec): min=5849, max=93589, avg=19890.34, stdev=9060.51 00:24:02.877 clat (usec): min=456, max=2023, avg=759.04, stdev=68.33 00:24:02.877 lat (usec): min=477, max=2047, avg=778.93, stdev=70.10 00:24:02.877 clat percentiles (usec): 00:24:02.877 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 709], 00:24:02.877 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 750], 60.00th=[ 766], 00:24:02.877 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:24:02.877 | 99.00th=[ 1004], 99.50th=[ 1074], 99.90th=[ 1156], 99.95th=[ 1811], 00:24:02.877 | 99.99th=[ 1975] 00:24:02.877 bw ( KiB/s): min=18336, max=20032, per=50.03%, avg=19701.47, stdev=376.03, samples=19 00:24:02.877 iops : min= 4584, max= 5008, avg=4925.37, stdev=94.01, samples=19 00:24:02.877 lat (usec) : 500=0.01%, 750=49.29%, 1000=49.68% 00:24:02.877 lat (msec) : 2=1.02%, 4=0.01% 00:24:02.877 cpu : usr=93.66%, sys=5.13%, ctx=11, majf=0, minf=0 00:24:02.877 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:02.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.877 issued rwts: total=49225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.877 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:02.877 00:24:02.877 Run status group 0 (all jobs): 00:24:02.877 READ: bw=38.5MiB/s (40.3MB/s), 19.2MiB/s-19.2MiB/s (20.2MB/s-20.2MB/s), io=385MiB (403MB), run=10001-10001msec 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:02.877 ************************************ 00:24:02.877 END TEST fio_dif_1_multi_subsystems 00:24:02.877 ************************************ 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.877 00:24:02.877 real 0m11.073s 00:24:02.877 user 0m19.377s 00:24:02.877 sys 0m1.376s 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:02.877 18:41:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:02.877 18:41:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:02.877 18:41:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:02.877 ************************************ 00:24:02.877 START TEST fio_dif_rand_params 00:24:02.877 ************************************ 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:02.877 18:41:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.877 bdev_null0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:02.877 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.878 [2024-12-08 18:41:20.527262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:02.878 { 00:24:02.878 "params": { 00:24:02.878 "name": "Nvme$subsystem", 00:24:02.878 "trtype": "$TEST_TRANSPORT", 00:24:02.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.878 "adrfam": "ipv4", 00:24:02.878 "trsvcid": "$NVMF_PORT", 00:24:02.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.878 "hdgst": ${hdgst:-false}, 00:24:02.878 "ddgst": ${ddgst:-false} 00:24:02.878 }, 00:24:02.878 "method": "bdev_nvme_attach_controller" 00:24:02.878 } 00:24:02.878 EOF 00:24:02.878 )") 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:02.878 "params": { 00:24:02.878 "name": "Nvme0", 00:24:02.878 "trtype": "tcp", 00:24:02.878 "traddr": "10.0.0.3", 00:24:02.878 "adrfam": "ipv4", 00:24:02.878 "trsvcid": "4420", 00:24:02.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.878 "hdgst": false, 00:24:02.878 "ddgst": false 00:24:02.878 }, 00:24:02.878 "method": "bdev_nvme_attach_controller" 00:24:02.878 }' 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:02.878 18:41:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.878 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:02.878 ... 00:24:02.878 fio-3.35 00:24:02.878 Starting 3 threads 00:24:09.550 00:24:09.550 filename0: (groupid=0, jobs=1): err= 0: pid=97847: Sun Dec 8 18:41:26 2024 00:24:09.550 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(193MiB/5002msec) 00:24:09.550 slat (nsec): min=5875, max=45501, avg=9160.40, stdev=4017.64 00:24:09.550 clat (usec): min=9192, max=19582, avg=9707.00, stdev=1111.82 00:24:09.550 lat (usec): min=9215, max=19592, avg=9716.16, stdev=1111.73 00:24:09.550 clat percentiles (usec): 00:24:09.550 | 1.00th=[ 9241], 5.00th=[ 9241], 10.00th=[ 9241], 20.00th=[ 9372], 00:24:09.550 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9372], 00:24:09.550 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10421], 00:24:09.550 | 99.00th=[15008], 99.50th=[15139], 99.90th=[19530], 99.95th=[19530], 00:24:09.550 | 99.99th=[19530] 00:24:09.550 bw ( KiB/s): min=33792, max=40704, per=33.21%, avg=39338.67, stdev=2736.33, samples=9 00:24:09.550 iops : min= 264, max= 318, avg=307.33, stdev=21.38, samples=9 00:24:09.550 lat (msec) : 10=85.28%, 20=14.72% 00:24:09.550 cpu : usr=92.70%, sys=6.76%, ctx=14, majf=0, minf=9 00:24:09.550 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:09.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.550 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:09.550 filename0: (groupid=0, jobs=1): err= 0: pid=97848: Sun Dec 8 18:41:26 2024 00:24:09.550 read: IOPS=308, BW=38.6MiB/s (40.5MB/s)(193MiB/5005msec) 00:24:09.550 slat (nsec): min=6020, max=54586, avg=13412.04, stdev=7019.71 00:24:09.550 clat (usec): min=7017, max=20351, avg=9684.16, stdev=1107.74 00:24:09.550 lat (usec): min=7040, max=20365, avg=9697.58, stdev=1107.90 00:24:09.550 clat percentiles (usec): 00:24:09.550 | 1.00th=[ 9241], 5.00th=[ 9241], 10.00th=[ 9241], 20.00th=[ 9241], 00:24:09.550 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9372], 00:24:09.550 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10421], 00:24:09.550 | 99.00th=[15008], 99.50th=[15008], 99.90th=[20317], 99.95th=[20317], 00:24:09.550 | 99.99th=[20317] 00:24:09.550 bw ( KiB/s): min=33792, max=40704, per=33.21%, avg=39338.67, stdev=2736.33, samples=9 00:24:09.550 iops : min= 264, max= 318, avg=307.33, stdev=21.38, samples=9 00:24:09.550 lat (msec) : 10=85.89%, 20=13.92%, 50=0.19% 00:24:09.550 cpu : usr=94.12%, sys=5.28%, ctx=32, majf=0, minf=9 00:24:09.550 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:09.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.550 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:09.550 filename0: (groupid=0, jobs=1): err= 0: pid=97849: Sun Dec 8 18:41:26 2024 00:24:09.550 read: IOPS=308, BW=38.6MiB/s (40.5MB/s)(193MiB/5005msec) 00:24:09.550 slat (nsec): min=5929, max=58795, avg=11953.25, stdev=6225.43 00:24:09.550 clat (usec): min=6977, 
max=20354, avg=9687.94, stdev=1112.40 00:24:09.550 lat (usec): min=6983, max=20365, avg=9699.90, stdev=1112.79 00:24:09.550 clat percentiles (usec): 00:24:09.550 | 1.00th=[ 9241], 5.00th=[ 9241], 10.00th=[ 9241], 20.00th=[ 9372], 00:24:09.550 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9372], 00:24:09.550 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10552], 00:24:09.550 | 99.00th=[15008], 99.50th=[15008], 99.90th=[20317], 99.95th=[20317], 00:24:09.550 | 99.99th=[20317] 00:24:09.550 bw ( KiB/s): min=33792, max=40704, per=33.21%, avg=39338.67, stdev=2736.33, samples=9 00:24:09.550 iops : min= 264, max= 318, avg=307.33, stdev=21.38, samples=9 00:24:09.550 lat (msec) : 10=85.83%, 20=13.98%, 50=0.19% 00:24:09.550 cpu : usr=92.85%, sys=6.59%, ctx=9, majf=0, minf=9 00:24:09.550 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:09.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.550 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:09.550 00:24:09.550 Run status group 0 (all jobs): 00:24:09.550 READ: bw=116MiB/s (121MB/s), 38.5MiB/s-38.6MiB/s (40.4MB/s-40.5MB/s), io=579MiB (607MB), run=5002-5005msec 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 bdev_null0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 [2024-12-08 18:41:26.464207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 bdev_null1 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.550 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.551 bdev_null2 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:24:09.551 { 00:24:09.551 "params": { 00:24:09.551 "name": "Nvme$subsystem", 00:24:09.551 "trtype": "$TEST_TRANSPORT", 00:24:09.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.551 "adrfam": "ipv4", 00:24:09.551 "trsvcid": "$NVMF_PORT", 00:24:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.551 "hdgst": ${hdgst:-false}, 00:24:09.551 "ddgst": ${ddgst:-false} 00:24:09.551 }, 00:24:09.551 "method": "bdev_nvme_attach_controller" 00:24:09.551 } 00:24:09.551 EOF 00:24:09.551 )") 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:09.551 { 00:24:09.551 "params": { 00:24:09.551 "name": "Nvme$subsystem", 00:24:09.551 "trtype": "$TEST_TRANSPORT", 00:24:09.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.551 "adrfam": "ipv4", 00:24:09.551 "trsvcid": "$NVMF_PORT", 00:24:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.551 "hdgst": ${hdgst:-false}, 00:24:09.551 "ddgst": ${ddgst:-false} 00:24:09.551 }, 00:24:09.551 "method": "bdev_nvme_attach_controller" 00:24:09.551 } 00:24:09.551 EOF 00:24:09.551 )") 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
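For reference, the bdev_nvme_attach_controller entries that gen_nvmf_target_json prints in this trace can be exercised outside the harness. A minimal sketch follows, assuming the SPDK fio bdev plugin at the path shown above and reusing the 10.0.0.3:4420 listener and cnode0/host0 NQNs from the trace; the /tmp file names, the job-file parameters, and the Nvme0n1 bdev name are illustrative assumptions, not values taken from dif.sh.

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

    # JSON config with the same attach-controller params as the trace,
    # wrapped in the top-level "subsystems"/"bdev" envelope the plugin loads.
    cat > /tmp/nvme0.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON

    # Job file mirroring the randread/4k/iodepth=16 jobs in this run; the
    # Nvme0n1 filename assumes the default namespace bdev name for controller Nvme0.
    cat > /tmp/dif.fio <<'FIO'
    [global]
    thread=1
    rw=randread
    bs=4k
    iodepth=16

    [filename0]
    filename=Nvme0n1
    FIO

    # Same invocation pattern as the trace: preload the plugin, point fio at the
    # JSON config and the job file.
    LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf=/tmp/nvme0.json /tmp/dif.fio

The harness itself feeds both files through /dev/fd/62 and /dev/fd/61 instead of temporary files, as the command lines in this log show.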
00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:09.551 { 00:24:09.551 "params": { 00:24:09.551 "name": "Nvme$subsystem", 00:24:09.551 "trtype": "$TEST_TRANSPORT", 00:24:09.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.551 "adrfam": "ipv4", 00:24:09.551 "trsvcid": "$NVMF_PORT", 00:24:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.551 "hdgst": ${hdgst:-false}, 00:24:09.551 "ddgst": ${ddgst:-false} 00:24:09.551 }, 00:24:09.551 "method": "bdev_nvme_attach_controller" 00:24:09.551 } 00:24:09.551 EOF 00:24:09.551 )") 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:09.551 "params": { 00:24:09.551 "name": "Nvme0", 00:24:09.551 "trtype": "tcp", 00:24:09.551 "traddr": "10.0.0.3", 00:24:09.551 "adrfam": "ipv4", 00:24:09.551 "trsvcid": "4420", 00:24:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:09.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:09.551 "hdgst": false, 00:24:09.551 "ddgst": false 00:24:09.551 }, 00:24:09.551 "method": "bdev_nvme_attach_controller" 00:24:09.551 },{ 00:24:09.551 "params": { 00:24:09.551 "name": "Nvme1", 00:24:09.551 "trtype": "tcp", 00:24:09.551 "traddr": "10.0.0.3", 00:24:09.551 "adrfam": "ipv4", 00:24:09.551 "trsvcid": "4420", 00:24:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:09.551 "hdgst": false, 00:24:09.551 "ddgst": false 00:24:09.551 }, 00:24:09.551 "method": "bdev_nvme_attach_controller" 00:24:09.551 },{ 00:24:09.551 "params": { 00:24:09.551 "name": "Nvme2", 00:24:09.551 "trtype": "tcp", 00:24:09.551 "traddr": "10.0.0.3", 00:24:09.551 "adrfam": "ipv4", 00:24:09.551 "trsvcid": "4420", 00:24:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:09.551 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:09.551 "hdgst": false, 00:24:09.551 "ddgst": false 00:24:09.551 }, 00:24:09.551 "method": "bdev_nvme_attach_controller" 00:24:09.551 }' 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:09.551 
18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:09.551 18:41:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:09.551 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:09.551 ... 00:24:09.551 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:09.551 ... 00:24:09.551 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:09.551 ... 00:24:09.551 fio-3.35 00:24:09.551 Starting 24 threads 00:24:21.755 00:24:21.755 filename0: (groupid=0, jobs=1): err= 0: pid=97943: Sun Dec 8 18:41:37 2024 00:24:21.755 read: IOPS=232, BW=930KiB/s (953kB/s)(9316KiB/10012msec) 00:24:21.755 slat (usec): min=3, max=8044, avg=44.26, stdev=397.91 00:24:21.755 clat (msec): min=11, max=125, avg=68.60, stdev=18.38 00:24:21.755 lat (msec): min=11, max=125, avg=68.64, stdev=18.40 00:24:21.755 clat percentiles (msec): 00:24:21.755 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 57], 00:24:21.755 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:24:21.755 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 100], 00:24:21.755 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 112], 99.95th=[ 115], 00:24:21.755 | 99.99th=[ 126] 00:24:21.755 bw ( KiB/s): min= 712, max= 1128, per=4.07%, avg=910.32, stdev=130.82, samples=19 00:24:21.755 iops : min= 178, max= 282, avg=227.58, stdev=32.70, samples=19 00:24:21.755 lat (msec) : 20=0.56%, 50=15.59%, 100=79.60%, 250=4.25% 00:24:21.755 cpu : usr=36.25%, sys=1.37%, ctx=1002, majf=0, minf=9 00:24:21.755 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.3%, 16=16.9%, 32=0.0%, >=64=0.0% 00:24:21.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 complete : 0=0.0%, 4=87.9%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.755 filename0: (groupid=0, jobs=1): err= 0: pid=97944: Sun Dec 8 18:41:37 2024 00:24:21.755 read: IOPS=238, BW=954KiB/s (977kB/s)(9584KiB/10047msec) 00:24:21.755 slat (usec): min=6, max=8033, avg=22.35, stdev=183.29 00:24:21.755 clat (msec): min=5, max=131, avg=66.92, stdev=20.31 00:24:21.755 lat (msec): min=5, max=131, avg=66.94, stdev=20.31 00:24:21.755 clat percentiles (msec): 00:24:21.755 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 52], 00:24:21.755 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 70], 00:24:21.755 | 70.00th=[ 73], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 100], 00:24:21.755 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 128], 99.95th=[ 130], 00:24:21.755 | 99.99th=[ 132] 00:24:21.755 bw ( KiB/s): min= 712, max= 1528, per=4.25%, avg=951.65, stdev=184.82, samples=20 00:24:21.755 iops : min= 178, max= 382, avg=237.90, stdev=46.21, samples=20 00:24:21.755 lat (msec) : 10=2.59%, 50=16.19%, 100=77.09%, 250=4.13% 00:24:21.755 cpu : usr=37.99%, sys=1.47%, ctx=1112, majf=0, minf=9 00:24:21.755 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.5%, 16=16.7%, 
32=0.0%, >=64=0.0% 00:24:21.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 issued rwts: total=2396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.755 filename0: (groupid=0, jobs=1): err= 0: pid=97945: Sun Dec 8 18:41:37 2024 00:24:21.755 read: IOPS=239, BW=958KiB/s (981kB/s)(9608KiB/10027msec) 00:24:21.755 slat (usec): min=6, max=7027, avg=27.70, stdev=244.84 00:24:21.755 clat (msec): min=5, max=128, avg=66.59, stdev=19.65 00:24:21.755 lat (msec): min=5, max=128, avg=66.62, stdev=19.65 00:24:21.755 clat percentiles (msec): 00:24:21.755 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 50], 00:24:21.755 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:24:21.755 | 70.00th=[ 73], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 99], 00:24:21.755 | 99.00th=[ 106], 99.50th=[ 106], 99.90th=[ 121], 99.95th=[ 129], 00:24:21.755 | 99.99th=[ 129] 00:24:21.755 bw ( KiB/s): min= 736, max= 1344, per=4.28%, avg=957.00, stdev=165.16, samples=20 00:24:21.755 iops : min= 184, max= 336, avg=239.25, stdev=41.29, samples=20 00:24:21.755 lat (msec) : 10=1.17%, 20=0.75%, 50=18.44%, 100=75.90%, 250=3.75% 00:24:21.755 cpu : usr=40.83%, sys=1.62%, ctx=1352, majf=0, minf=0 00:24:21.755 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:24:21.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.755 filename0: (groupid=0, jobs=1): err= 0: pid=97946: Sun Dec 8 18:41:37 2024 00:24:21.755 read: IOPS=247, BW=988KiB/s (1012kB/s)(9892KiB/10011msec) 00:24:21.755 slat (usec): min=5, max=9056, avg=36.12, stdev=328.77 00:24:21.755 clat (msec): min=10, max=110, avg=64.62, stdev=19.22 00:24:21.755 lat (msec): min=10, max=110, avg=64.66, stdev=19.21 00:24:21.755 clat percentiles (msec): 00:24:21.755 | 1.00th=[ 26], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 47], 00:24:21.755 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 68], 00:24:21.755 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 97], 00:24:21.755 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 111], 99.95th=[ 111], 00:24:21.755 | 99.99th=[ 111] 00:24:21.755 bw ( KiB/s): min= 744, max= 1208, per=4.34%, avg=970.11, stdev=160.11, samples=19 00:24:21.755 iops : min= 186, max= 302, avg=242.53, stdev=40.03, samples=19 00:24:21.755 lat (msec) : 20=0.49%, 50=25.64%, 100=71.37%, 250=2.51% 00:24:21.755 cpu : usr=38.18%, sys=1.37%, ctx=1134, majf=0, minf=9 00:24:21.755 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:21.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 issued rwts: total=2473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.755 filename0: (groupid=0, jobs=1): err= 0: pid=97947: Sun Dec 8 18:41:37 2024 00:24:21.755 read: IOPS=233, BW=935KiB/s (958kB/s)(9376KiB/10027msec) 00:24:21.755 slat (usec): min=4, max=8032, avg=28.83, stdev=258.55 00:24:21.755 clat (msec): min=24, max=126, avg=68.27, stdev=17.61 00:24:21.755 lat (msec): min=24, max=126, 
avg=68.29, stdev=17.61 00:24:21.755 clat percentiles (msec): 00:24:21.755 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 56], 00:24:21.755 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 70], 00:24:21.755 | 70.00th=[ 74], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 97], 00:24:21.755 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 121], 00:24:21.755 | 99.99th=[ 127] 00:24:21.755 bw ( KiB/s): min= 712, max= 1168, per=4.17%, avg=933.30, stdev=129.98, samples=20 00:24:21.755 iops : min= 178, max= 292, avg=233.30, stdev=32.49, samples=20 00:24:21.755 lat (msec) : 50=16.04%, 100=80.25%, 250=3.71% 00:24:21.755 cpu : usr=36.04%, sys=1.33%, ctx=1073, majf=0, minf=9 00:24:21.755 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:21.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.755 issued rwts: total=2344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.755 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename0: (groupid=0, jobs=1): err= 0: pid=97948: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=241, BW=966KiB/s (989kB/s)(9676KiB/10018msec) 00:24:21.756 slat (usec): min=3, max=8029, avg=41.27, stdev=320.16 00:24:21.756 clat (msec): min=16, max=124, avg=66.05, stdev=18.83 00:24:21.756 lat (msec): min=16, max=124, avg=66.09, stdev=18.83 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:24:21.756 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 69], 00:24:21.756 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 100], 00:24:21.756 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 125], 00:24:21.756 | 99.99th=[ 126] 00:24:21.756 bw ( KiB/s): min= 768, max= 1200, per=4.27%, avg=954.11, stdev=149.66, samples=19 00:24:21.756 iops : min= 192, max= 300, avg=238.53, stdev=37.42, samples=19 00:24:21.756 lat (msec) : 20=0.17%, 50=23.94%, 100=71.31%, 250=4.59% 00:24:21.756 cpu : usr=40.70%, sys=1.64%, ctx=1156, majf=0, minf=9 00:24:21.756 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename0: (groupid=0, jobs=1): err= 0: pid=97949: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=235, BW=942KiB/s (964kB/s)(9448KiB/10031msec) 00:24:21.756 slat (usec): min=3, max=8035, avg=33.38, stdev=298.29 00:24:21.756 clat (msec): min=31, max=120, avg=67.77, stdev=18.07 00:24:21.756 lat (msec): min=31, max=120, avg=67.81, stdev=18.07 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 52], 00:24:21.756 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:24:21.756 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 100], 00:24:21.756 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 120], 99.95th=[ 121], 00:24:21.756 | 99.99th=[ 121] 00:24:21.756 bw ( KiB/s): min= 720, max= 1152, per=4.20%, avg=940.45, stdev=130.77, samples=20 00:24:21.756 iops : min= 180, max= 288, avg=235.10, stdev=32.70, samples=20 00:24:21.756 lat (msec) : 50=19.05%, 100=76.59%, 250=4.36% 00:24:21.756 cpu : usr=36.47%, sys=1.49%, ctx=1086, majf=0, 
minf=9 00:24:21.756 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 issued rwts: total=2362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename0: (groupid=0, jobs=1): err= 0: pid=97950: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=233, BW=935KiB/s (958kB/s)(9380KiB/10029msec) 00:24:21.756 slat (usec): min=3, max=8048, avg=41.88, stdev=379.40 00:24:21.756 clat (msec): min=26, max=122, avg=68.21, stdev=20.91 00:24:21.756 lat (msec): min=26, max=122, avg=68.26, stdev=20.91 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:24:21.756 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:24:21.756 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 97], 95.00th=[ 104], 00:24:21.756 | 99.00th=[ 118], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 123], 00:24:21.756 | 99.99th=[ 123] 00:24:21.756 bw ( KiB/s): min= 640, max= 1160, per=4.17%, avg=933.35, stdev=184.47, samples=20 00:24:21.756 iops : min= 160, max= 290, avg=233.30, stdev=46.16, samples=20 00:24:21.756 lat (msec) : 50=22.81%, 100=69.51%, 250=7.68% 00:24:21.756 cpu : usr=40.81%, sys=1.35%, ctx=1201, majf=0, minf=9 00:24:21.756 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 issued rwts: total=2345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename1: (groupid=0, jobs=1): err= 0: pid=97951: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=233, BW=933KiB/s (956kB/s)(9360KiB/10027msec) 00:24:21.756 slat (usec): min=6, max=8044, avg=35.66, stdev=340.96 00:24:21.756 clat (msec): min=25, max=130, avg=68.36, stdev=17.68 00:24:21.756 lat (msec): min=25, max=130, avg=68.39, stdev=17.68 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:24:21.756 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:24:21.756 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 99], 00:24:21.756 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 121], 00:24:21.756 | 99.99th=[ 131] 00:24:21.756 bw ( KiB/s): min= 736, max= 1152, per=4.17%, avg=932.05, stdev=132.37, samples=20 00:24:21.756 iops : min= 184, max= 288, avg=233.00, stdev=33.08, samples=20 00:24:21.756 lat (msec) : 50=17.44%, 100=78.38%, 250=4.19% 00:24:21.756 cpu : usr=34.60%, sys=1.59%, ctx=878, majf=0, minf=9 00:24:21.756 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename1: (groupid=0, jobs=1): err= 0: pid=97952: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=212, BW=852KiB/s (872kB/s)(8552KiB/10039msec) 00:24:21.756 slat (usec): min=5, max=7997, avg=23.18, stdev=193.53 00:24:21.756 clat (msec): min=15, max=143, avg=74.87, 
stdev=21.94 00:24:21.756 lat (msec): min=15, max=144, avg=74.90, stdev=21.94 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 59], 00:24:21.756 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 83], 00:24:21.756 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 111], 00:24:21.756 | 99.00th=[ 123], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 144], 00:24:21.756 | 99.99th=[ 144] 00:24:21.756 bw ( KiB/s): min= 528, max= 1080, per=3.81%, avg=851.60, stdev=185.64, samples=20 00:24:21.756 iops : min= 132, max= 270, avg=212.90, stdev=46.41, samples=20 00:24:21.756 lat (msec) : 20=0.65%, 50=14.22%, 100=73.15%, 250=11.97% 00:24:21.756 cpu : usr=33.35%, sys=1.52%, ctx=911, majf=0, minf=9 00:24:21.756 IO depths : 1=0.1%, 2=2.9%, 4=11.6%, 8=70.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 complete : 0=0.0%, 4=91.0%, 8=6.4%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename1: (groupid=0, jobs=1): err= 0: pid=97953: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=226, BW=907KiB/s (929kB/s)(9116KiB/10049msec) 00:24:21.756 slat (usec): min=6, max=4025, avg=24.32, stdev=149.22 00:24:21.756 clat (msec): min=3, max=142, avg=70.32, stdev=24.83 00:24:21.756 lat (msec): min=3, max=142, avg=70.34, stdev=24.83 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 51], 00:24:21.756 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:24:21.756 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 116], 00:24:21.756 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 142], 00:24:21.756 | 99.99th=[ 142] 00:24:21.756 bw ( KiB/s): min= 528, max= 1520, per=4.05%, avg=905.05, stdev=246.82, samples=20 00:24:21.756 iops : min= 132, max= 380, avg=226.25, stdev=61.72, samples=20 00:24:21.756 lat (msec) : 4=0.61%, 10=1.97%, 20=0.13%, 50=17.03%, 100=68.71% 00:24:21.756 lat (msec) : 250=11.54% 00:24:21.756 cpu : usr=41.65%, sys=1.66%, ctx=1183, majf=0, minf=0 00:24:21.756 IO depths : 1=0.1%, 2=2.1%, 4=8.0%, 8=74.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 complete : 0=0.0%, 4=89.8%, 8=8.5%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 issued rwts: total=2279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename1: (groupid=0, jobs=1): err= 0: pid=97954: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=228, BW=915KiB/s (937kB/s)(9168KiB/10017msec) 00:24:21.756 slat (usec): min=4, max=11975, avg=36.30, stdev=382.89 00:24:21.756 clat (msec): min=16, max=137, avg=69.75, stdev=21.51 00:24:21.756 lat (msec): min=16, max=137, avg=69.78, stdev=21.50 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 32], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 50], 00:24:21.756 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:24:21.756 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 100], 95.00th=[ 108], 00:24:21.756 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 138], 00:24:21.756 | 99.99th=[ 138] 00:24:21.756 bw ( KiB/s): min= 528, max= 1152, per=4.02%, avg=898.95, stdev=189.54, samples=19 00:24:21.756 iops : min= 132, max= 288, avg=224.74, stdev=47.39, samples=19 00:24:21.756 
lat (msec) : 20=0.48%, 50=20.24%, 100=70.51%, 250=8.77% 00:24:21.756 cpu : usr=32.74%, sys=1.06%, ctx=998, majf=0, minf=9 00:24:21.756 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:21.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.756 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.756 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.756 filename1: (groupid=0, jobs=1): err= 0: pid=97955: Sun Dec 8 18:41:37 2024 00:24:21.756 read: IOPS=244, BW=977KiB/s (1001kB/s)(9776KiB/10003msec) 00:24:21.756 slat (usec): min=4, max=8028, avg=24.15, stdev=162.29 00:24:21.756 clat (msec): min=2, max=155, avg=65.37, stdev=20.88 00:24:21.756 lat (msec): min=2, max=155, avg=65.40, stdev=20.88 00:24:21.756 clat percentiles (msec): 00:24:21.756 | 1.00th=[ 22], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:24:21.756 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 68], 00:24:21.756 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 102], 00:24:21.756 | 99.00th=[ 108], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 157], 00:24:21.756 | 99.99th=[ 157] 00:24:21.756 bw ( KiB/s): min= 652, max= 1176, per=4.27%, avg=955.16, stdev=177.57, samples=19 00:24:21.756 iops : min= 163, max= 294, avg=238.79, stdev=44.39, samples=19 00:24:21.757 lat (msec) : 4=0.29%, 10=0.37%, 20=0.25%, 50=26.35%, 100=67.39% 00:24:21.757 lat (msec) : 250=5.36% 00:24:21.757 cpu : usr=35.70%, sys=1.25%, ctx=1075, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename1: (groupid=0, jobs=1): err= 0: pid=97956: Sun Dec 8 18:41:37 2024 00:24:21.757 read: IOPS=230, BW=921KiB/s (943kB/s)(9216KiB/10010msec) 00:24:21.757 slat (usec): min=4, max=8029, avg=35.54, stdev=258.47 00:24:21.757 clat (msec): min=10, max=129, avg=69.34, stdev=22.28 00:24:21.757 lat (msec): min=11, max=129, avg=69.38, stdev=22.28 00:24:21.757 clat percentiles (msec): 00:24:21.757 | 1.00th=[ 28], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:24:21.757 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 71], 00:24:21.757 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 105], 00:24:21.757 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 130], 00:24:21.757 | 99.99th=[ 130] 00:24:21.757 bw ( KiB/s): min= 528, max= 1208, per=4.02%, avg=898.53, stdev=211.34, samples=19 00:24:21.757 iops : min= 132, max= 302, avg=224.63, stdev=52.84, samples=19 00:24:21.757 lat (msec) : 20=0.13%, 50=24.39%, 100=66.62%, 250=8.85% 00:24:21.757 cpu : usr=42.96%, sys=1.48%, ctx=1347, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=1.6%, 4=6.6%, 8=76.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename1: (groupid=0, jobs=1): err= 0: pid=97957: Sun Dec 8 18:41:37 2024 00:24:21.757 read: 
IOPS=245, BW=980KiB/s (1004kB/s)(9812KiB/10008msec) 00:24:21.757 slat (usec): min=4, max=8043, avg=30.80, stdev=252.69 00:24:21.757 clat (msec): min=10, max=118, avg=65.14, stdev=19.70 00:24:21.757 lat (msec): min=10, max=118, avg=65.17, stdev=19.69 00:24:21.757 clat percentiles (msec): 00:24:21.757 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 46], 00:24:21.757 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 68], 00:24:21.757 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 97], 00:24:21.757 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 111], 99.95th=[ 118], 00:24:21.757 | 99.99th=[ 118] 00:24:21.757 bw ( KiB/s): min= 720, max= 1232, per=4.29%, avg=960.00, stdev=172.45, samples=19 00:24:21.757 iops : min= 180, max= 308, avg=240.00, stdev=43.11, samples=19 00:24:21.757 lat (msec) : 20=0.49%, 50=26.21%, 100=69.38%, 250=3.91% 00:24:21.757 cpu : usr=37.63%, sys=1.29%, ctx=1050, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename1: (groupid=0, jobs=1): err= 0: pid=97958: Sun Dec 8 18:41:37 2024 00:24:21.757 read: IOPS=234, BW=939KiB/s (962kB/s)(9420KiB/10032msec) 00:24:21.757 slat (usec): min=6, max=8045, avg=25.80, stdev=171.18 00:24:21.757 clat (msec): min=13, max=126, avg=67.99, stdev=18.21 00:24:21.757 lat (msec): min=13, max=126, avg=68.01, stdev=18.21 00:24:21.757 clat percentiles (msec): 00:24:21.757 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 54], 00:24:21.757 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:24:21.757 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 100], 00:24:21.757 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:24:21.757 | 99.99th=[ 127] 00:24:21.757 bw ( KiB/s): min= 768, max= 1176, per=4.18%, avg=935.60, stdev=131.11, samples=20 00:24:21.757 iops : min= 192, max= 294, avg=233.90, stdev=32.78, samples=20 00:24:21.757 lat (msec) : 20=0.59%, 50=16.65%, 100=78.43%, 250=4.33% 00:24:21.757 cpu : usr=35.90%, sys=1.42%, ctx=981, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename2: (groupid=0, jobs=1): err= 0: pid=97959: Sun Dec 8 18:41:37 2024 00:24:21.757 read: IOPS=225, BW=900KiB/s (922kB/s)(9020KiB/10022msec) 00:24:21.757 slat (usec): min=3, max=4039, avg=24.71, stdev=137.45 00:24:21.757 clat (msec): min=28, max=129, avg=70.95, stdev=20.37 00:24:21.757 lat (msec): min=28, max=129, avg=70.98, stdev=20.37 00:24:21.757 clat percentiles (msec): 00:24:21.757 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 52], 00:24:21.757 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 73], 00:24:21.757 | 70.00th=[ 86], 80.00th=[ 94], 90.00th=[ 99], 95.00th=[ 103], 00:24:21.757 | 99.00th=[ 115], 99.50th=[ 117], 99.90th=[ 130], 99.95th=[ 130], 00:24:21.757 | 99.99th=[ 130] 00:24:21.757 bw ( KiB/s): min= 640, max= 1176, per=4.00%, 
avg=895.65, stdev=181.08, samples=20 00:24:21.757 iops : min= 160, max= 294, avg=223.90, stdev=45.28, samples=20 00:24:21.757 lat (msec) : 50=18.09%, 100=74.28%, 250=7.63% 00:24:21.757 cpu : usr=42.34%, sys=1.84%, ctx=1505, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=75.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=89.4%, 8=9.0%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename2: (groupid=0, jobs=1): err= 0: pid=97960: Sun Dec 8 18:41:37 2024 00:24:21.757 read: IOPS=236, BW=946KiB/s (969kB/s)(9492KiB/10030msec) 00:24:21.757 slat (usec): min=6, max=8033, avg=30.04, stdev=296.35 00:24:21.757 clat (msec): min=7, max=128, avg=67.45, stdev=20.35 00:24:21.757 lat (msec): min=7, max=128, avg=67.48, stdev=20.36 00:24:21.757 clat percentiles (msec): 00:24:21.757 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 52], 00:24:21.757 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71], 00:24:21.757 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 100], 00:24:21.757 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 126], 99.95th=[ 128], 00:24:21.757 | 99.99th=[ 129] 00:24:21.757 bw ( KiB/s): min= 704, max= 1496, per=4.22%, avg=944.85, stdev=188.96, samples=20 00:24:21.757 iops : min= 176, max= 374, avg=236.20, stdev=47.25, samples=20 00:24:21.757 lat (msec) : 10=1.35%, 20=0.67%, 50=16.60%, 100=76.78%, 250=4.59% 00:24:21.757 cpu : usr=34.21%, sys=1.20%, ctx=991, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=81.8%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename2: (groupid=0, jobs=1): err= 0: pid=97961: Sun Dec 8 18:41:37 2024 00:24:21.757 read: IOPS=222, BW=888KiB/s (909kB/s)(8896KiB/10016msec) 00:24:21.757 slat (usec): min=3, max=9019, avg=40.07, stdev=378.34 00:24:21.757 clat (msec): min=25, max=136, avg=71.80, stdev=24.43 00:24:21.757 lat (msec): min=25, max=136, avg=71.84, stdev=24.43 00:24:21.757 clat percentiles (msec): 00:24:21.757 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 48], 00:24:21.757 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 78], 00:24:21.757 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 116], 00:24:21.757 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 138], 00:24:21.757 | 99.99th=[ 138] 00:24:21.757 bw ( KiB/s): min= 528, max= 1184, per=3.90%, avg=872.00, stdev=237.59, samples=19 00:24:21.757 iops : min= 132, max= 296, avg=218.00, stdev=59.40, samples=19 00:24:21.757 lat (msec) : 50=23.88%, 100=62.90%, 250=13.22% 00:24:21.757 cpu : usr=40.96%, sys=1.23%, ctx=1207, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=2.5%, 4=9.8%, 8=72.9%, 16=14.7%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=89.9%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename2: (groupid=0, 
jobs=1): err= 0: pid=97962: Sun Dec 8 18:41:37 2024 00:24:21.757 read: IOPS=231, BW=928KiB/s (950kB/s)(9308KiB/10034msec) 00:24:21.757 slat (usec): min=3, max=8014, avg=39.70, stdev=341.19 00:24:21.757 clat (msec): min=12, max=133, avg=68.71, stdev=21.05 00:24:21.757 lat (msec): min=12, max=133, avg=68.75, stdev=21.06 00:24:21.757 clat percentiles (msec): 00:24:21.757 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 51], 00:24:21.757 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 70], 00:24:21.757 | 70.00th=[ 79], 80.00th=[ 91], 90.00th=[ 99], 95.00th=[ 105], 00:24:21.757 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 134], 00:24:21.757 | 99.99th=[ 134] 00:24:21.757 bw ( KiB/s): min= 640, max= 1136, per=4.14%, avg=926.80, stdev=183.90, samples=20 00:24:21.757 iops : min= 160, max= 284, avg=231.70, stdev=45.98, samples=20 00:24:21.757 lat (msec) : 20=0.60%, 50=18.65%, 100=73.36%, 250=7.39% 00:24:21.757 cpu : usr=44.36%, sys=1.65%, ctx=1501, majf=0, minf=9 00:24:21.757 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.757 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.757 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.757 filename2: (groupid=0, jobs=1): err= 0: pid=97963: Sun Dec 8 18:41:37 2024 00:24:21.757 read: IOPS=224, BW=898KiB/s (919kB/s)(9000KiB/10026msec) 00:24:21.757 slat (usec): min=6, max=8047, avg=41.99, stdev=375.08 00:24:21.757 clat (msec): min=27, max=131, avg=71.03, stdev=21.14 00:24:21.757 lat (msec): min=27, max=131, avg=71.07, stdev=21.14 00:24:21.757 clat percentiles (msec): 00:24:21.758 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 52], 00:24:21.758 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:24:21.758 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 99], 95.00th=[ 107], 00:24:21.758 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 131], 00:24:21.758 | 99.99th=[ 131] 00:24:21.758 bw ( KiB/s): min= 640, max= 1144, per=4.01%, avg=896.00, stdev=195.80, samples=20 00:24:21.758 iops : min= 160, max= 286, avg=224.00, stdev=48.95, samples=20 00:24:21.758 lat (msec) : 50=19.24%, 100=72.53%, 250=8.22% 00:24:21.758 cpu : usr=37.31%, sys=1.31%, ctx=1012, majf=0, minf=9 00:24:21.758 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=75.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:21.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.758 filename2: (groupid=0, jobs=1): err= 0: pid=97964: Sun Dec 8 18:41:37 2024 00:24:21.758 read: IOPS=236, BW=946KiB/s (969kB/s)(9484KiB/10024msec) 00:24:21.758 slat (usec): min=4, max=8031, avg=24.70, stdev=164.85 00:24:21.758 clat (msec): min=26, max=133, avg=67.48, stdev=19.70 00:24:21.758 lat (msec): min=26, max=133, avg=67.50, stdev=19.70 00:24:21.758 clat percentiles (msec): 00:24:21.758 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:24:21.758 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:24:21.758 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 103], 00:24:21.758 | 99.00th=[ 113], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 133], 00:24:21.758 | 99.99th=[ 
133] 00:24:21.758 bw ( KiB/s): min= 720, max= 1176, per=4.22%, avg=944.15, stdev=168.86, samples=20 00:24:21.758 iops : min= 180, max= 294, avg=236.00, stdev=42.26, samples=20 00:24:21.758 lat (msec) : 50=24.17%, 100=69.80%, 250=6.03% 00:24:21.758 cpu : usr=31.74%, sys=1.37%, ctx=1048, majf=0, minf=9 00:24:21.758 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:21.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 issued rwts: total=2371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.758 filename2: (groupid=0, jobs=1): err= 0: pid=97965: Sun Dec 8 18:41:37 2024 00:24:21.758 read: IOPS=239, BW=956KiB/s (979kB/s)(9580KiB/10017msec) 00:24:21.758 slat (usec): min=3, max=8040, avg=45.77, stdev=397.35 00:24:21.758 clat (msec): min=19, max=118, avg=66.71, stdev=18.89 00:24:21.758 lat (msec): min=19, max=118, avg=66.76, stdev=18.89 00:24:21.758 clat percentiles (msec): 00:24:21.758 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:24:21.758 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:24:21.758 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 99], 00:24:21.758 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 115], 99.95th=[ 120], 00:24:21.758 | 99.99th=[ 120] 00:24:21.758 bw ( KiB/s): min= 720, max= 1208, per=4.20%, avg=938.11, stdev=152.86, samples=19 00:24:21.758 iops : min= 180, max= 302, avg=234.53, stdev=38.21, samples=19 00:24:21.758 lat (msec) : 20=0.29%, 50=21.67%, 100=74.03%, 250=4.01% 00:24:21.758 cpu : usr=37.56%, sys=1.36%, ctx=1044, majf=0, minf=9 00:24:21.758 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:21.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 issued rwts: total=2395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.758 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.758 filename2: (groupid=0, jobs=1): err= 0: pid=97966: Sun Dec 8 18:41:37 2024 00:24:21.758 read: IOPS=228, BW=915KiB/s (937kB/s)(9176KiB/10032msec) 00:24:21.758 slat (usec): min=3, max=8023, avg=24.30, stdev=171.82 00:24:21.758 clat (msec): min=32, max=121, avg=69.78, stdev=18.56 00:24:21.758 lat (msec): min=32, max=121, avg=69.81, stdev=18.56 00:24:21.758 clat percentiles (msec): 00:24:21.758 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:24:21.758 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 72], 00:24:21.758 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 103], 00:24:21.758 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:24:21.758 | 99.99th=[ 123] 00:24:21.758 bw ( KiB/s): min= 704, max= 1144, per=4.09%, avg=914.00, stdev=147.26, samples=20 00:24:21.758 iops : min= 176, max= 286, avg=228.50, stdev=36.81, samples=20 00:24:21.758 lat (msec) : 50=15.43%, 100=78.38%, 250=6.19% 00:24:21.758 cpu : usr=37.48%, sys=1.21%, ctx=1383, majf=0, minf=9 00:24:21.758 IO depths : 1=0.1%, 2=0.4%, 4=1.9%, 8=80.9%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:21.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.758 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.758 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:24:21.758 00:24:21.758 Run status group 0 (all jobs): 00:24:21.758 READ: bw=21.8MiB/s (22.9MB/s), 852KiB/s-988KiB/s (872kB/s-1012kB/s), io=219MiB (230MB), run=10003-10049msec 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 bdev_null0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.758 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.759 [2024-12-08 18:41:37.759885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:21.759 18:41:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.759 bdev_null1 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:21.759 { 00:24:21.759 "params": { 00:24:21.759 "name": "Nvme$subsystem", 00:24:21.759 "trtype": "$TEST_TRANSPORT", 00:24:21.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.759 "adrfam": "ipv4", 00:24:21.759 "trsvcid": "$NVMF_PORT", 00:24:21.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.759 "hdgst": ${hdgst:-false}, 00:24:21.759 "ddgst": ${ddgst:-false} 00:24:21.759 }, 00:24:21.759 "method": "bdev_nvme_attach_controller" 00:24:21.759 } 00:24:21.759 EOF 00:24:21.759 )") 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:21.759 18:41:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:21.759 { 00:24:21.759 "params": { 00:24:21.759 "name": "Nvme$subsystem", 00:24:21.759 "trtype": "$TEST_TRANSPORT", 00:24:21.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.759 "adrfam": "ipv4", 00:24:21.759 "trsvcid": "$NVMF_PORT", 00:24:21.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.759 "hdgst": ${hdgst:-false}, 00:24:21.759 "ddgst": ${ddgst:-false} 00:24:21.759 }, 00:24:21.759 "method": "bdev_nvme_attach_controller" 00:24:21.759 } 00:24:21.759 EOF 00:24:21.759 )") 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
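For reference, the create_subsystem trace above boils down to four SPDK RPCs per subsystem. The sketch below is a hand-run equivalent, not part of the test output: it assumes an nvmf_tgt application is already running with the TCP transport created earlier in the run (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py), and it reuses the exact arguments shown in the trace.

# Hypothetical manual replay of the traced target-side setup (DIF type 1 null bdevs).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for sub in 0 1; do
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    "$RPC" bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
        --serial-number "53313233-${sub}" --allow-any-host
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
        -t tcp -a 10.0.0.3 -s 4420
done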
00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:21.759 "params": { 00:24:21.759 "name": "Nvme0", 00:24:21.759 "trtype": "tcp", 00:24:21.759 "traddr": "10.0.0.3", 00:24:21.759 "adrfam": "ipv4", 00:24:21.759 "trsvcid": "4420", 00:24:21.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:21.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:21.759 "hdgst": false, 00:24:21.759 "ddgst": false 00:24:21.759 }, 00:24:21.759 "method": "bdev_nvme_attach_controller" 00:24:21.759 },{ 00:24:21.759 "params": { 00:24:21.759 "name": "Nvme1", 00:24:21.759 "trtype": "tcp", 00:24:21.759 "traddr": "10.0.0.3", 00:24:21.759 "adrfam": "ipv4", 00:24:21.759 "trsvcid": "4420", 00:24:21.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.759 "hdgst": false, 00:24:21.759 "ddgst": false 00:24:21.759 }, 00:24:21.759 "method": "bdev_nvme_attach_controller" 00:24:21.759 }' 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:21.759 18:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.759 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:21.759 ... 00:24:21.759 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:21.759 ... 
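The fio job file itself is generated by gen_fio_conf and handed to fio over /dev/fd/61, so it never appears in the log. A rough standalone equivalent, reconstructed from the traced parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) and the JSON printed above, is sketched below; the "subsystems"/"bdev" wrapper and the Nvme0n1/Nvme1n1 filenames are assumptions based on SPDK's standard JSON-config layout and controller-plus-n1 bdev naming, not something taken from this log.

# Sketch: re-running the random-read DIF job outside the harness.
SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/dif_rand.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } }
    ]
  } ]
}
EOF
cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1
direct=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
# Same invocation style as the trace: bdev fio plugin loaded via LD_PRELOAD.
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/dif_rand.json /tmp/dif_rand.fio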
00:24:21.759 fio-3.35 00:24:21.759 Starting 4 threads 00:24:25.951 00:24:25.951 filename0: (groupid=0, jobs=1): err= 0: pid=98100: Sun Dec 8 18:41:43 2024 00:24:25.951 read: IOPS=2277, BW=17.8MiB/s (18.7MB/s)(89.0MiB/5004msec) 00:24:25.951 slat (nsec): min=5894, max=96713, avg=18522.44, stdev=10659.75 00:24:25.951 clat (usec): min=699, max=14541, avg=3456.53, stdev=925.18 00:24:25.951 lat (usec): min=711, max=14575, avg=3475.05, stdev=925.83 00:24:25.951 clat percentiles (usec): 00:24:25.951 | 1.00th=[ 1532], 5.00th=[ 1991], 10.00th=[ 2147], 20.00th=[ 2376], 00:24:25.951 | 30.00th=[ 2900], 40.00th=[ 3458], 50.00th=[ 3785], 60.00th=[ 3949], 00:24:25.951 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4490], 00:24:25.951 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 7963], 99.95th=[10290], 00:24:25.951 | 99.99th=[10421] 00:24:25.951 bw ( KiB/s): min=15056, max=20239, per=24.91%, avg=18177.67, stdev=2078.42, samples=9 00:24:25.951 iops : min= 1882, max= 2529, avg=2272.11, stdev=259.69, samples=9 00:24:25.951 lat (usec) : 750=0.01%, 1000=0.08% 00:24:25.951 lat (msec) : 2=5.24%, 4=60.79%, 10=33.82%, 20=0.07% 00:24:25.951 cpu : usr=95.08%, sys=4.08%, ctx=8, majf=0, minf=9 00:24:25.951 IO depths : 1=0.9%, 2=8.7%, 4=59.2%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.951 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.951 issued rwts: total=11397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.951 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:25.951 filename0: (groupid=0, jobs=1): err= 0: pid=98101: Sun Dec 8 18:41:43 2024 00:24:25.951 read: IOPS=2332, BW=18.2MiB/s (19.1MB/s)(91.1MiB/5001msec) 00:24:25.951 slat (nsec): min=3479, max=96499, avg=20434.59, stdev=10226.25 00:24:25.952 clat (usec): min=286, max=14637, avg=3372.01, stdev=1032.91 00:24:25.952 lat (usec): min=294, max=14671, avg=3392.44, stdev=1032.97 00:24:25.952 clat percentiles (usec): 00:24:25.952 | 1.00th=[ 1074], 5.00th=[ 1876], 10.00th=[ 1975], 20.00th=[ 2278], 00:24:25.952 | 30.00th=[ 2638], 40.00th=[ 3228], 50.00th=[ 3556], 60.00th=[ 3851], 00:24:25.952 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4621], 00:24:25.952 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7963], 99.95th=[10421], 00:24:25.952 | 99.99th=[10421] 00:24:25.952 bw ( KiB/s): min=15328, max=21648, per=25.23%, avg=18408.11, stdev=1973.40, samples=9 00:24:25.952 iops : min= 1916, max= 2706, avg=2301.00, stdev=246.67, samples=9 00:24:25.952 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.38% 00:24:25.952 lat (msec) : 2=10.26%, 4=56.25%, 10=33.00%, 20=0.07% 00:24:25.952 cpu : usr=95.10%, sys=4.10%, ctx=12, majf=0, minf=0 00:24:25.952 IO depths : 1=1.0%, 2=6.9%, 4=60.1%, 8=31.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.952 complete : 0=0.0%, 4=97.3%, 8=2.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.952 issued rwts: total=11664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:25.952 filename1: (groupid=0, jobs=1): err= 0: pid=98102: Sun Dec 8 18:41:43 2024 00:24:25.952 read: IOPS=2254, BW=17.6MiB/s (18.5MB/s)(88.1MiB/5002msec) 00:24:25.952 slat (nsec): min=6067, max=91886, avg=20957.41, stdev=11063.28 00:24:25.952 clat (usec): min=628, max=9835, avg=3483.76, stdev=984.33 00:24:25.952 lat (usec): min=641, max=9856, avg=3504.72, stdev=983.97 00:24:25.952 clat 
percentiles (usec): 00:24:25.952 | 1.00th=[ 1369], 5.00th=[ 1876], 10.00th=[ 1975], 20.00th=[ 2376], 00:24:25.952 | 30.00th=[ 3064], 40.00th=[ 3425], 50.00th=[ 3720], 60.00th=[ 3884], 00:24:25.952 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4621], 00:24:25.952 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7111], 99.95th=[ 8455], 00:24:25.952 | 99.99th=[ 9765] 00:24:25.952 bw ( KiB/s): min=14912, max=21648, per=24.96%, avg=18209.00, stdev=1881.27, samples=9 00:24:25.952 iops : min= 1864, max= 2706, avg=2276.00, stdev=235.23, samples=9 00:24:25.952 lat (usec) : 750=0.01%, 1000=0.12% 00:24:25.952 lat (msec) : 2=10.40%, 4=54.40%, 10=35.07% 00:24:25.952 cpu : usr=94.64%, sys=4.62%, ctx=9, majf=0, minf=9 00:24:25.952 IO depths : 1=1.2%, 2=9.5%, 4=58.7%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.952 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.952 issued rwts: total=11279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:25.952 filename1: (groupid=0, jobs=1): err= 0: pid=98103: Sun Dec 8 18:41:43 2024 00:24:25.952 read: IOPS=2259, BW=17.7MiB/s (18.5MB/s)(88.3MiB/5001msec) 00:24:25.952 slat (usec): min=6, max=173, avg=22.28, stdev=11.32 00:24:25.952 clat (usec): min=443, max=9748, avg=3470.49, stdev=1018.69 00:24:25.952 lat (usec): min=454, max=9765, avg=3492.77, stdev=1017.93 00:24:25.952 clat percentiles (usec): 00:24:25.952 | 1.00th=[ 1483], 5.00th=[ 1909], 10.00th=[ 2147], 20.00th=[ 2343], 00:24:25.952 | 30.00th=[ 2868], 40.00th=[ 3392], 50.00th=[ 3785], 60.00th=[ 3949], 00:24:25.952 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4621], 00:24:25.952 | 99.00th=[ 6783], 99.50th=[ 7898], 99.90th=[ 8356], 99.95th=[ 8455], 00:24:25.952 | 99.99th=[ 9503] 00:24:25.952 bw ( KiB/s): min=13984, max=19744, per=24.68%, avg=18011.11, stdev=1960.09, samples=9 00:24:25.952 iops : min= 1748, max= 2468, avg=2251.33, stdev=245.00, samples=9 00:24:25.952 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:24:25.952 lat (msec) : 2=6.04%, 4=60.45%, 10=33.46% 00:24:25.952 cpu : usr=94.14%, sys=4.74%, ctx=66, majf=0, minf=9 00:24:25.952 IO depths : 1=1.2%, 2=9.3%, 4=58.8%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.952 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.952 issued rwts: total=11301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.952 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:25.952 00:24:25.952 Run status group 0 (all jobs): 00:24:25.952 READ: bw=71.3MiB/s (74.7MB/s), 17.6MiB/s-18.2MiB/s (18.5MB/s-19.1MB/s), io=357MiB (374MB), run=5001-5004msec 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.952 
18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.952 ************************************ 00:24:25.952 END TEST fio_dif_rand_params 00:24:25.952 ************************************ 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.952 00:24:25.952 real 0m23.371s 00:24:25.952 user 2m6.195s 00:24:25.952 sys 0m6.119s 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.952 18:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.212 18:41:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:26.212 18:41:43 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:26.212 18:41:43 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:26.212 18:41:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:26.212 ************************************ 00:24:26.212 START TEST fio_dif_digest 00:24:26.212 ************************************ 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:26.212 18:41:43 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.212 bdev_null0 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.212 [2024-12-08 18:41:43.957111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:26.212 { 00:24:26.212 "params": { 00:24:26.212 "name": "Nvme$subsystem", 00:24:26.212 "trtype": "$TEST_TRANSPORT", 00:24:26.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.212 "adrfam": "ipv4", 00:24:26.212 "trsvcid": "$NVMF_PORT", 00:24:26.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.212 "hdgst": ${hdgst:-false}, 00:24:26.212 "ddgst": ${ddgst:-false} 00:24:26.212 }, 00:24:26.212 "method": 
"bdev_nvme_attach_controller" 00:24:26.212 } 00:24:26.212 EOF 00:24:26.212 )") 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:26.212 "params": { 00:24:26.212 "name": "Nvme0", 00:24:26.212 "trtype": "tcp", 00:24:26.212 "traddr": "10.0.0.3", 00:24:26.212 "adrfam": "ipv4", 00:24:26.212 "trsvcid": "4420", 00:24:26.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:26.212 "hdgst": true, 00:24:26.212 "ddgst": true 00:24:26.212 }, 00:24:26.212 "method": "bdev_nvme_attach_controller" 00:24:26.212 }' 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:26.212 18:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:26.212 18:41:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.472 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:26.472 ... 
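The digest pass differs from the earlier random-params jobs mainly on the host side: header and data digests are enabled on the NVMe/TCP connection ("hdgst": true, "ddgst": true in the config above) while the backing null bdev was created with --dif-type 3, and fio runs 3 threads at 128 KiB blocks, queue depth 3, for 10 seconds. A hedged approximation of the generated job file is sketched below; it assumes the single-controller JSON printed above has been saved to /tmp/dif_digest.json and that the attached controller shows up as bdev Nvme0n1.

# Approximate job file for the digest run; the real one is generated on the
# fly by gen_fio_conf and is not captured in the log.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
thread=1
direct=1
rw=randread
bs=128k,128k,128k
iodepth=3
numjobs=3
runtime=10
time_based=1
[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/dif_digest.json /tmp/dif_digest.fio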
00:24:26.472 fio-3.35 00:24:26.472 Starting 3 threads 00:24:38.715 00:24:38.715 filename0: (groupid=0, jobs=1): err= 0: pid=98210: Sun Dec 8 18:41:54 2024 00:24:38.716 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(332MiB/10004msec) 00:24:38.716 slat (nsec): min=6242, max=72698, avg=12978.62, stdev=7821.32 00:24:38.716 clat (usec): min=9119, max=19916, avg=11270.38, stdev=798.01 00:24:38.716 lat (usec): min=9128, max=19937, avg=11283.36, stdev=798.09 00:24:38.716 clat percentiles (usec): 00:24:38.716 | 1.00th=[10814], 5.00th=[10945], 10.00th=[10945], 20.00th=[10945], 00:24:38.716 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:24:38.716 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 00:24:38.716 | 99.00th=[17171], 99.50th=[17171], 99.90th=[19792], 99.95th=[19792], 00:24:38.716 | 99.99th=[19792] 00:24:38.716 bw ( KiB/s): min=28416, max=34560, per=33.31%, avg=33945.60, stdev=1355.64, samples=20 00:24:38.716 iops : min= 222, max= 270, avg=265.20, stdev=10.59, samples=20 00:24:38.716 lat (msec) : 10=0.11%, 20=99.89% 00:24:38.716 cpu : usr=94.35%, sys=5.17%, ctx=104, majf=0, minf=9 00:24:38.716 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:38.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.716 issued rwts: total=2655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:38.716 filename0: (groupid=0, jobs=1): err= 0: pid=98211: Sun Dec 8 18:41:54 2024 00:24:38.716 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(332MiB/10005msec) 00:24:38.716 slat (nsec): min=6166, max=70227, avg=12098.56, stdev=7231.55 00:24:38.716 clat (usec): min=8463, max=22949, avg=11273.85, stdev=836.51 00:24:38.716 lat (usec): min=8470, max=22978, avg=11285.94, stdev=836.39 00:24:38.716 clat percentiles (usec): 00:24:38.716 | 1.00th=[10814], 5.00th=[10945], 10.00th=[10945], 20.00th=[11076], 00:24:38.716 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:24:38.716 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 00:24:38.716 | 99.00th=[17171], 99.50th=[17171], 99.90th=[22938], 99.95th=[22938], 00:24:38.716 | 99.99th=[22938] 00:24:38.716 bw ( KiB/s): min=27648, max=34560, per=33.31%, avg=33945.60, stdev=1527.89, samples=20 00:24:38.716 iops : min= 216, max= 270, avg=265.20, stdev=11.94, samples=20 00:24:38.716 lat (msec) : 10=0.11%, 20=99.77%, 50=0.11% 00:24:38.716 cpu : usr=95.14%, sys=4.42%, ctx=8, majf=0, minf=0 00:24:38.716 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:38.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.716 issued rwts: total=2655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:38.716 filename0: (groupid=0, jobs=1): err= 0: pid=98212: Sun Dec 8 18:41:54 2024 00:24:38.716 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(332MiB/10005msec) 00:24:38.716 slat (nsec): min=6167, max=70489, avg=11099.76, stdev=6377.13 00:24:38.716 clat (usec): min=6767, max=22380, avg=11277.26, stdev=842.30 00:24:38.716 lat (usec): min=6774, max=22401, avg=11288.36, stdev=842.59 00:24:38.716 clat percentiles (usec): 00:24:38.716 | 1.00th=[10814], 5.00th=[10945], 10.00th=[10945], 20.00th=[11076], 00:24:38.716 | 30.00th=[11076], 
40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:24:38.716 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:24:38.716 | 99.00th=[17171], 99.50th=[17171], 99.90th=[22414], 99.95th=[22414], 00:24:38.716 | 99.99th=[22414] 00:24:38.716 bw ( KiB/s): min=28416, max=34560, per=33.31%, avg=33945.60, stdev=1355.64, samples=20 00:24:38.716 iops : min= 222, max= 270, avg=265.20, stdev=10.59, samples=20 00:24:38.716 lat (msec) : 10=0.11%, 20=99.77%, 50=0.11% 00:24:38.716 cpu : usr=95.40%, sys=3.82%, ctx=90, majf=0, minf=9 00:24:38.716 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:38.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.716 issued rwts: total=2655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:38.716 00:24:38.716 Run status group 0 (all jobs): 00:24:38.716 READ: bw=99.5MiB/s (104MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=996MiB (1044MB), run=10004-10005msec 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:38.716 ************************************ 00:24:38.716 END TEST fio_dif_digest 00:24:38.716 ************************************ 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.716 00:24:38.716 real 0m11.006s 00:24:38.716 user 0m29.126s 00:24:38.716 sys 0m1.646s 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:38.716 18:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:38.716 18:41:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:38.716 18:41:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:38.716 18:41:54 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:38.716 18:41:54 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.716 rmmod nvme_tcp 00:24:38.716 rmmod nvme_fabrics 00:24:38.716 rmmod nvme_keyring 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 97471 ']' 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 97471 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97471 ']' 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97471 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97471 00:24:38.716 killing process with pid 97471 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97471' 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97471 00:24:38.716 18:41:55 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97471 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:38.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:38.716 Waiting for block devices as requested 00:24:38.716 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:38.716 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:38.716 18:41:55 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.716 18:41:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:38.716 18:41:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.716 18:41:56 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:38.716 00:24:38.716 real 0m59.377s 00:24:38.716 user 3m49.461s 00:24:38.716 sys 0m17.305s 00:24:38.716 18:41:56 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:38.716 18:41:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:38.716 ************************************ 00:24:38.717 END TEST nvmf_dif 00:24:38.717 ************************************ 00:24:38.717 18:41:56 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:38.717 18:41:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:38.717 18:41:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:38.717 18:41:56 -- common/autotest_common.sh@10 -- # set +x 00:24:38.717 ************************************ 00:24:38.717 START TEST nvmf_abort_qd_sizes 00:24:38.717 ************************************ 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:38.717 * Looking for test storage... 00:24:38.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.717 --rc genhtml_branch_coverage=1 00:24:38.717 --rc genhtml_function_coverage=1 00:24:38.717 --rc genhtml_legend=1 00:24:38.717 --rc geninfo_all_blocks=1 00:24:38.717 --rc geninfo_unexecuted_blocks=1 00:24:38.717 00:24:38.717 ' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.717 --rc genhtml_branch_coverage=1 00:24:38.717 --rc genhtml_function_coverage=1 00:24:38.717 --rc genhtml_legend=1 00:24:38.717 --rc geninfo_all_blocks=1 00:24:38.717 --rc geninfo_unexecuted_blocks=1 00:24:38.717 00:24:38.717 ' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.717 --rc genhtml_branch_coverage=1 00:24:38.717 --rc genhtml_function_coverage=1 00:24:38.717 --rc genhtml_legend=1 00:24:38.717 --rc geninfo_all_blocks=1 00:24:38.717 --rc geninfo_unexecuted_blocks=1 00:24:38.717 00:24:38.717 ' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.717 --rc genhtml_branch_coverage=1 00:24:38.717 --rc genhtml_function_coverage=1 00:24:38.717 --rc genhtml_legend=1 00:24:38.717 --rc geninfo_all_blocks=1 00:24:38.717 --rc geninfo_unexecuted_blocks=1 00:24:38.717 00:24:38.717 ' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.717 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:38.717 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:38.718 Cannot find device "nvmf_init_br" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:38.718 Cannot find device "nvmf_init_br2" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:38.718 Cannot find device "nvmf_tgt_br" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:38.718 Cannot find device "nvmf_tgt_br2" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:38.718 Cannot find device "nvmf_init_br" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:38.718 Cannot find device "nvmf_init_br2" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:38.718 Cannot find device "nvmf_tgt_br" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:38.718 Cannot find device "nvmf_tgt_br2" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:38.718 Cannot find device "nvmf_br" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:38.718 Cannot find device "nvmf_init_if" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:38.718 Cannot find device "nvmf_init_if2" 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:38.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:38.718 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:38.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:38.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:38.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:24:38.978 00:24:38.978 --- 10.0.0.3 ping statistics --- 00:24:38.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.978 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:38.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:38.978 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:24:38.978 00:24:38.978 --- 10.0.0.4 ping statistics --- 00:24:38.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.978 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:38.978 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:38.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:24:38.978 00:24:38.978 --- 10.0.0.1 ping statistics --- 00:24:38.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.978 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:24:38.979 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:38.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:38.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:24:38.979 00:24:38.979 --- 10.0.0.2 ping statistics --- 00:24:38.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.979 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:38.979 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.979 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:24:38.979 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:38.979 18:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:39.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:39.918 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.918 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=98857 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 98857 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98857 ']' 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.918 18:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:40.178 [2024-12-08 18:41:57.879141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:24:40.178 [2024-12-08 18:41:57.879228] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.178 [2024-12-08 18:41:58.019571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.438 [2024-12-08 18:41:58.109627] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.438 [2024-12-08 18:41:58.109708] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.438 [2024-12-08 18:41:58.109724] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.438 [2024-12-08 18:41:58.109735] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.438 [2024-12-08 18:41:58.109745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.438 [2024-12-08 18:41:58.109925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.438 [2024-12-08 18:41:58.110346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.438 [2024-12-08 18:41:58.110542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.438 [2024-12-08 18:41:58.110563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.438 [2024-12-08 18:41:58.193785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:40.438 18:41:58 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
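
The enumeration traced above selects NVMe controllers by PCI class code 01/08/02 and keeps only the ones bound to the kernel nvme driver. Condensed into a standalone form it is roughly the following (a sketch of the same idea, assuming lspci and the usual sysfs driver layout; the exact field handling in scripts/common.sh differs slightly):

    #!/usr/bin/env bash
    # Print the PCI addresses of NVMe controllers (class 01, subclass 08, prog-if 02)
    # that are currently claimed by the kernel nvme driver.
    lspci -mm -n -D | grep -i -- '-p02' | tr -d '"' \
        | awk '$2 == "0108" { print $1 }' \
        | while read -r bdf; do
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
          done
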
00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.438 18:41:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:40.708 ************************************ 00:24:40.708 START TEST spdk_target_abort 00:24:40.708 ************************************ 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.708 spdk_targetn1 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.708 [2024-12-08 18:41:58.440883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.708 [2024-12-08 18:41:58.469055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:40.708 18:41:58 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.708 18:41:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:43.998 Initializing NVMe Controllers 00:24:43.998 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:43.998 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:43.998 Initialization complete. Launching workers. 
00:24:43.998 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9127, failed: 0 00:24:43.998 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1027, failed to submit 8100 00:24:43.998 success 797, unsuccessful 230, failed 0 00:24:43.998 18:42:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:43.998 18:42:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:47.290 Initializing NVMe Controllers 00:24:47.290 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:47.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:47.290 Initialization complete. Launching workers. 00:24:47.290 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8967, failed: 0 00:24:47.290 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1206, failed to submit 7761 00:24:47.290 success 376, unsuccessful 830, failed 0 00:24:47.290 18:42:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:47.290 18:42:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:50.578 Initializing NVMe Controllers 00:24:50.578 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:50.578 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:50.578 Initialization complete. Launching workers. 
00:24:50.578 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32829, failed: 0 00:24:50.578 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2474, failed to submit 30355 00:24:50.578 success 550, unsuccessful 1924, failed 0 00:24:50.578 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:50.578 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.578 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:50.578 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.578 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:50.578 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.578 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98857 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98857 ']' 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98857 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98857 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.837 killing process with pid 98857 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98857' 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98857 00:24:50.837 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98857 00:24:51.096 00:24:51.096 real 0m10.484s 00:24:51.096 user 0m40.446s 00:24:51.096 sys 0m2.009s 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:51.096 ************************************ 00:24:51.096 END TEST spdk_target_abort 00:24:51.096 ************************************ 00:24:51.096 18:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:51.096 18:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:51.096 18:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:51.096 18:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:51.096 ************************************ 00:24:51.096 START TEST kernel_target_abort 00:24:51.096 
************************************ 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:51.096 18:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:51.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.613 Waiting for block devices as requested 00:24:51.613 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:51.613 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:51.613 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:51.871 No valid GPT data, bailing 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:51.871 No valid GPT data, bailing 00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:51.871 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:51.872 No valid GPT data, bailing 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:51.872 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:51.872 No valid GPT data, bailing 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c --hostid=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c -a 10.0.0.1 -t tcp -s 4420 00:24:52.131 00:24:52.131 Discovery Log Number of Records 2, Generation counter 2 00:24:52.131 =====Discovery Log Entry 0====== 00:24:52.131 trtype: tcp 00:24:52.131 adrfam: ipv4 00:24:52.131 subtype: current discovery subsystem 00:24:52.131 treq: not specified, sq flow control disable supported 00:24:52.131 portid: 1 00:24:52.131 trsvcid: 4420 00:24:52.131 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:52.131 traddr: 10.0.0.1 00:24:52.131 eflags: none 00:24:52.131 sectype: none 00:24:52.131 =====Discovery Log Entry 1====== 00:24:52.131 trtype: tcp 00:24:52.131 adrfam: ipv4 00:24:52.131 subtype: nvme subsystem 00:24:52.131 treq: not specified, sq flow control disable supported 00:24:52.131 portid: 1 00:24:52.131 trsvcid: 4420 00:24:52.131 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:52.131 traddr: 10.0.0.1 00:24:52.131 eflags: none 00:24:52.131 sectype: none 00:24:52.131 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:52.132 18:42:09 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:52.132 18:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:55.420 Initializing NVMe Controllers 00:24:55.420 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:55.420 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:55.420 Initialization complete. Launching workers. 00:24:55.420 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30713, failed: 0 00:24:55.420 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30713, failed to submit 0 00:24:55.420 success 0, unsuccessful 30713, failed 0 00:24:55.420 18:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:55.420 18:42:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:58.709 Initializing NVMe Controllers 00:24:58.709 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:58.709 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:58.709 Initialization complete. Launching workers. 
00:24:58.709 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66152, failed: 0 00:24:58.709 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26774, failed to submit 39378 00:24:58.709 success 0, unsuccessful 26774, failed 0 00:24:58.709 18:42:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:58.709 18:42:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:02.018 Initializing NVMe Controllers 00:25:02.018 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:02.018 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:02.018 Initialization complete. Launching workers. 00:25:02.018 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72535, failed: 0 00:25:02.018 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18120, failed to submit 54415 00:25:02.018 success 0, unsuccessful 18120, failed 0 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:25:02.018 18:42:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:02.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:03.284 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:03.284 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:03.284 00:25:03.284 real 0m12.080s 00:25:03.284 user 0m5.636s 00:25:03.284 sys 0m3.772s 00:25:03.284 18:42:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.284 18:42:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:03.284 ************************************ 00:25:03.284 END TEST kernel_target_abort 00:25:03.284 ************************************ 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:03.284 
18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.284 rmmod nvme_tcp 00:25:03.284 rmmod nvme_fabrics 00:25:03.284 rmmod nvme_keyring 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 98857 ']' 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 98857 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98857 ']' 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98857 00:25:03.284 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98857) - No such process 00:25:03.284 Process with pid 98857 is not found 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98857 is not found' 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:25:03.284 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:03.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:03.852 Waiting for block devices as requested 00:25:03.852 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:03.852 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:03.852 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:04.112 18:42:21 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:04.112 18:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.112 18:42:22 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:25:04.112 00:25:04.112 real 0m25.754s 00:25:04.112 user 0m47.292s 00:25:04.112 sys 0m7.270s 00:25:04.112 18:42:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:04.112 ************************************ 00:25:04.112 END TEST nvmf_abort_qd_sizes 00:25:04.112 ************************************ 00:25:04.112 18:42:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:04.371 18:42:22 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:04.371 18:42:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:04.371 18:42:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:04.371 18:42:22 -- common/autotest_common.sh@10 -- # set +x 00:25:04.371 ************************************ 00:25:04.371 START TEST keyring_file 00:25:04.371 ************************************ 00:25:04.371 18:42:22 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:04.371 * Looking for test storage... 
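The nvmf_tcp_fini / nvmf_veth_fini commands above dismantle the veth-and-bridge network fixture before the next test starts. A condensed sketch, using the interface and namespace names from the trace; the trailing namespace delete is an assumption, since _remove_spdk_ns runs with its output suppressed:

  # Flush SPDK_NVMF iptables rules and tear down the veth/bridge fixture (sketch).
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for veth in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$veth" nomaster || true    # detach from the bridge
      ip link set "$veth" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  ip netns delete nvmf_tgt_ns_spdk || true    # assumption: this is what _remove_spdk_ns does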
00:25:04.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:04.371 18:42:22 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:04.371 18:42:22 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:25:04.371 18:42:22 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:04.371 18:42:22 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:04.371 18:42:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:25:04.372 18:42:22 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.372 18:42:22 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 18:42:22 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc 
geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 18:42:22 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 18:42:22 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:04.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.372 --rc genhtml_branch_coverage=1 00:25:04.372 --rc genhtml_function_coverage=1 00:25:04.372 --rc genhtml_legend=1 00:25:04.372 --rc geninfo_all_blocks=1 00:25:04.372 --rc geninfo_unexecuted_blocks=1 00:25:04.372 00:25:04.372 ' 00:25:04.372 18:42:22 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:04.372 18:42:22 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.372 18:42:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.372 18:42:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 18:42:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 18:42:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 18:42:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:04.372 18:42:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:04.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:04.372 18:42:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:04.372 18:42:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:04.372 18:42:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:04.372 18:42:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:04.372 18:42:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:04.630 18:42:22 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YDHHY7T65X 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YDHHY7T65X 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YDHHY7T65X 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YDHHY7T65X 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YhN8b8E4MZ 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:04.630 18:42:22 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YhN8b8E4MZ 00:25:04.630 18:42:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YhN8b8E4MZ 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YhN8b8E4MZ 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=99755 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:04.630 18:42:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99755 00:25:04.630 18:42:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99755 ']' 00:25:04.630 18:42:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.630 18:42:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
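The prep_key calls above turn a raw hex string into an NVMe TLS PSK interchange file that only the owner can read. A minimal sketch for key0 (key1 is built the same way from its own hex string); it assumes test/nvmf/common.sh and test/keyring/common.sh are sourced so format_interchange_psk is available, and that the helper writes the NVMeTLSkey-1 string to stdout, since the redirection itself is not visible in the trace:

  key0=00112233445566778899aabbccddeeff
  key0path=$(mktemp)                                # /tmp/tmp.YDHHY7T65X in this run
  format_interchange_psk "$key0" 0 > "$key0path"    # 0 is the digest argument used in the trace
  chmod 0600 "$key0path"                            # later steps show that looser permissions are rejected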
00:25:04.630 18:42:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.630 18:42:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.630 18:42:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:04.630 [2024-12-08 18:42:22.488554] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:04.630 [2024-12-08 18:42:22.488665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99755 ] 00:25:04.888 [2024-12-08 18:42:22.628319] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.888 [2024-12-08 18:42:22.705357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.888 [2024-12-08 18:42:22.781984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:05.146 18:42:22 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.146 18:42:22 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:05.146 18:42:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:05.146 18:42:22 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.146 18:42:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:05.146 [2024-12-08 18:42:23.000213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.146 null0 00:25:05.146 [2024-12-08 18:42:23.032177] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:05.146 [2024-12-08 18:42:23.032398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.146 18:42:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.146 18:42:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:05.146 [2024-12-08 18:42:23.064168] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:05.146 request: 00:25:05.146 { 00:25:05.146 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.147 "secure_channel": false, 00:25:05.147 "listen_address": { 00:25:05.147 "trtype": "tcp", 00:25:05.147 "traddr": "127.0.0.1", 00:25:05.147 "trsvcid": "4420" 00:25:05.147 }, 00:25:05.147 "method": "nvmf_subsystem_add_listener", 
00:25:05.147 "req_id": 1 00:25:05.147 } 00:25:05.147 Got JSON-RPC error response 00:25:05.147 response: 00:25:05.147 { 00:25:05.147 "code": -32602, 00:25:05.147 "message": "Invalid parameters" 00:25:05.147 } 00:25:05.147 18:42:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:05.147 18:42:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:05.147 18:42:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:05.147 18:42:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:05.147 18:42:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:05.405 18:42:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=99766 00:25:05.405 18:42:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 99766 /var/tmp/bperf.sock 00:25:05.405 18:42:23 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:05.405 18:42:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99766 ']' 00:25:05.405 18:42:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.405 18:42:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.405 18:42:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:05.405 18:42:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.405 18:42:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:05.405 [2024-12-08 18:42:23.131850] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:05.405 [2024-12-08 18:42:23.131968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99766 ] 00:25:05.405 [2024-12-08 18:42:23.268527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.663 [2024-12-08 18:42:23.335866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.663 [2024-12-08 18:42:23.393108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:05.663 18:42:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.663 18:42:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:05.663 18:42:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:05.663 18:42:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:05.921 18:42:23 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YhN8b8E4MZ 00:25:05.921 18:42:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YhN8b8E4MZ 00:25:06.180 18:42:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:06.180 18:42:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:06.180 18:42:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.180 18:42:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.180 18:42:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.437 18:42:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YDHHY7T65X == \/\t\m\p\/\t\m\p\.\Y\D\H\H\Y\7\T\6\5\X ]] 00:25:06.437 18:42:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:06.437 18:42:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:06.437 18:42:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.437 18:42:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.437 18:42:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:06.695 18:42:24 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.YhN8b8E4MZ == \/\t\m\p\/\t\m\p\.\Y\h\N\8\b\8\E\4\M\Z ]] 00:25:06.954 18:42:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.954 18:42:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:06.954 18:42:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.954 18:42:24 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.954 18:42:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:07.211 18:42:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:07.211 18:42:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:07.211 18:42:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:07.469 [2024-12-08 18:42:25.309764] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.469 nvme0n1 00:25:07.469 18:42:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:07.469 18:42:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.469 18:42:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:07.469 18:42:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.469 18:42:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.469 18:42:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:08.036 18:42:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:08.036 18:42:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:08.036 18:42:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:08.036 18:42:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:08.036 18:42:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:08.036 18:42:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:08.036 18:42:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.036 18:42:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:08.036 18:42:25 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:08.295 Running I/O for 1 seconds... 
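Every bperf_cmd and keyring/common.sh wrapper in the trace above reduces to a plain rpc.py call against the bdevperf socket. The same sequence written out directly, reusing key0path from the earlier sketch and a key1path prepared the same way:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register both PSK files with the bdevperf keyring.
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key1 "$key1path"
  # Attach an NVMe-oF TCP controller that uses key0 as its TLS PSK.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # The refcount assertions are jq filters over keyring_get_keys, for example:
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'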
00:25:09.232 13606.00 IOPS, 53.15 MiB/s 00:25:09.232 Latency(us) 00:25:09.232 [2024-12-08T18:42:27.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.232 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:09.232 nvme0n1 : 1.01 13652.68 53.33 0.00 0.00 9351.77 3693.85 16205.27 00:25:09.232 [2024-12-08T18:42:27.162Z] =================================================================================================================== 00:25:09.232 [2024-12-08T18:42:27.162Z] Total : 13652.68 53.33 0.00 0.00 9351.77 3693.85 16205.27 00:25:09.232 { 00:25:09.232 "results": [ 00:25:09.232 { 00:25:09.232 "job": "nvme0n1", 00:25:09.232 "core_mask": "0x2", 00:25:09.232 "workload": "randrw", 00:25:09.232 "percentage": 50, 00:25:09.232 "status": "finished", 00:25:09.232 "queue_depth": 128, 00:25:09.232 "io_size": 4096, 00:25:09.232 "runtime": 1.006103, 00:25:09.232 "iops": 13652.67770794839, 00:25:09.232 "mibps": 53.3307722966734, 00:25:09.232 "io_failed": 0, 00:25:09.232 "io_timeout": 0, 00:25:09.232 "avg_latency_us": 9351.767967384973, 00:25:09.232 "min_latency_us": 3693.847272727273, 00:25:09.232 "max_latency_us": 16205.265454545455 00:25:09.232 } 00:25:09.232 ], 00:25:09.232 "core_count": 1 00:25:09.232 } 00:25:09.232 18:42:27 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:09.232 18:42:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:09.491 18:42:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:09.491 18:42:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:09.491 18:42:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.491 18:42:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.491 18:42:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.491 18:42:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.750 18:42:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:09.750 18:42:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:09.750 18:42:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.750 18:42:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:09.750 18:42:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.750 18:42:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.750 18:42:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:10.009 18:42:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:10.009 18:42:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:10.009 18:42:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:10.009 18:42:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:10.009 18:42:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:10.009 18:42:27 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.009 18:42:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:10.009 18:42:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.009 18:42:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:10.009 18:42:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:10.269 [2024-12-08 18:42:28.106558] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:10.269 [2024-12-08 18:42:28.106763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06320 (107): Transport endpoint is not connected 00:25:10.269 [2024-12-08 18:42:28.107750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06320 (9): Bad file descriptor 00:25:10.269 [2024-12-08 18:42:28.108747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:10.269 [2024-12-08 18:42:28.108769] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:10.269 [2024-12-08 18:42:28.108779] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:10.269 [2024-12-08 18:42:28.108805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
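The attach failure above is the expected outcome of this step: retrying the connection with key1 should not succeed, presumably because key1 does not match the PSK the target side was configured with, so the TLS handshake breaks down (hence the "Transport endpoint is not connected" errors). The test asserts that by wrapping the call in the NOT helper from autotest_common.sh, so only a non-zero exit counts as success; the JSON-RPC request and error response that follow are what rpc_cmd prints for such an expected failure. Written out directly with the same flags as the trace, reusing the rpc shorthand from the previous sketch:

  NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1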
00:25:10.269 request: 00:25:10.269 { 00:25:10.269 "name": "nvme0", 00:25:10.269 "trtype": "tcp", 00:25:10.269 "traddr": "127.0.0.1", 00:25:10.269 "adrfam": "ipv4", 00:25:10.269 "trsvcid": "4420", 00:25:10.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:10.269 "prchk_reftag": false, 00:25:10.269 "prchk_guard": false, 00:25:10.269 "hdgst": false, 00:25:10.269 "ddgst": false, 00:25:10.269 "psk": "key1", 00:25:10.269 "allow_unrecognized_csi": false, 00:25:10.269 "method": "bdev_nvme_attach_controller", 00:25:10.269 "req_id": 1 00:25:10.269 } 00:25:10.269 Got JSON-RPC error response 00:25:10.269 response: 00:25:10.269 { 00:25:10.269 "code": -5, 00:25:10.269 "message": "Input/output error" 00:25:10.269 } 00:25:10.269 18:42:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:10.269 18:42:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.269 18:42:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.269 18:42:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.269 18:42:28 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:10.269 18:42:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:10.269 18:42:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:10.269 18:42:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:10.269 18:42:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.269 18:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.528 18:42:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:10.528 18:42:28 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:10.528 18:42:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:10.528 18:42:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:10.528 18:42:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.528 18:42:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:10.528 18:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.787 18:42:28 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:10.787 18:42:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:10.787 18:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:11.046 18:42:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:11.046 18:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:11.305 18:42:29 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:11.306 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.306 18:42:29 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:11.565 18:42:29 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:11.565 18:42:29 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.YDHHY7T65X 00:25:11.565 18:42:29 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:11.565 18:42:29 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:25:11.565 18:42:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:11.565 18:42:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:11.565 18:42:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.565 18:42:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:11.565 18:42:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.565 18:42:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:11.565 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:11.824 [2024-12-08 18:42:29.595192] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YDHHY7T65X': 0100660 00:25:11.824 [2024-12-08 18:42:29.595230] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:11.824 request: 00:25:11.824 { 00:25:11.824 "name": "key0", 00:25:11.824 "path": "/tmp/tmp.YDHHY7T65X", 00:25:11.824 "method": "keyring_file_add_key", 00:25:11.824 "req_id": 1 00:25:11.824 } 00:25:11.824 Got JSON-RPC error response 00:25:11.824 response: 00:25:11.824 { 00:25:11.824 "code": -1, 00:25:11.824 "message": "Operation not permitted" 00:25:11.824 } 00:25:11.824 18:42:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:11.824 18:42:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.824 18:42:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.824 18:42:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.824 18:42:29 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.YDHHY7T65X 00:25:11.824 18:42:29 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:11.824 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YDHHY7T65X 00:25:12.083 18:42:29 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.YDHHY7T65X 00:25:12.083 18:42:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:12.083 18:42:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:12.083 18:42:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.083 18:42:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.083 18:42:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:12.083 18:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.343 18:42:30 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:12.343 18:42:30 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.343 18:42:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:12.343 18:42:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.343 18:42:30 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:12.343 18:42:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.343 18:42:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:12.343 18:42:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.343 18:42:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.343 18:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.602 [2024-12-08 18:42:30.415359] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YDHHY7T65X': No such file or directory 00:25:12.602 [2024-12-08 18:42:30.415394] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:12.602 [2024-12-08 18:42:30.415454] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:12.602 [2024-12-08 18:42:30.415463] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:12.602 [2024-12-08 18:42:30.415473] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:12.602 [2024-12-08 18:42:30.415481] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:12.602 request: 00:25:12.602 { 00:25:12.602 "name": "nvme0", 00:25:12.602 "trtype": "tcp", 00:25:12.602 "traddr": "127.0.0.1", 00:25:12.602 "adrfam": "ipv4", 00:25:12.602 "trsvcid": "4420", 00:25:12.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:12.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:12.602 "prchk_reftag": false, 00:25:12.602 "prchk_guard": false, 00:25:12.602 "hdgst": false, 00:25:12.602 "ddgst": false, 00:25:12.602 "psk": "key0", 00:25:12.602 "allow_unrecognized_csi": false, 00:25:12.602 "method": "bdev_nvme_attach_controller", 00:25:12.602 "req_id": 1 00:25:12.602 } 00:25:12.602 Got JSON-RPC error response 00:25:12.602 response: 00:25:12.602 { 00:25:12.602 "code": -19, 00:25:12.602 "message": "No such device" 00:25:12.602 } 00:25:12.602 18:42:30 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:12.602 18:42:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:12.602 18:42:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:12.602 18:42:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:12.602 18:42:30 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:12.602 18:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:12.862 18:42:30 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:12.862 
18:42:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XaPwtvrQou 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:12.862 18:42:30 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:12.862 18:42:30 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:12.862 18:42:30 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:12.862 18:42:30 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:12.862 18:42:30 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:12.862 18:42:30 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XaPwtvrQou 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XaPwtvrQou 00:25:12.862 18:42:30 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.XaPwtvrQou 00:25:12.862 18:42:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XaPwtvrQou 00:25:12.862 18:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XaPwtvrQou 00:25:13.121 18:42:30 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:13.121 18:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:13.690 nvme0n1 00:25:13.690 18:42:31 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:13.690 18:42:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:13.690 18:42:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:13.690 18:42:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:13.690 18:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:13.690 18:42:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:13.690 18:42:31 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:13.690 18:42:31 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:13.690 18:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:13.950 18:42:31 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:13.950 18:42:31 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:13.950 18:42:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:13.950 18:42:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:13.950 18:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.209 18:42:32 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:14.209 18:42:32 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:14.209 18:42:32 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:14.209 18:42:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:14.209 18:42:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:14.209 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.209 18:42:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:14.469 18:42:32 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:14.469 18:42:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:14.469 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:14.728 18:42:32 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:14.728 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.728 18:42:32 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:14.988 18:42:32 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:14.988 18:42:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XaPwtvrQou 00:25:14.988 18:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XaPwtvrQou 00:25:15.247 18:42:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YhN8b8E4MZ 00:25:15.247 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YhN8b8E4MZ 00:25:15.507 18:42:33 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:15.507 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:15.767 nvme0n1 00:25:15.767 18:42:33 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:15.767 18:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:16.026 18:42:33 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:16.026 "subsystems": [ 00:25:16.026 { 00:25:16.026 "subsystem": "keyring", 00:25:16.026 "config": [ 00:25:16.026 { 00:25:16.026 "method": "keyring_file_add_key", 00:25:16.026 "params": { 00:25:16.026 "name": "key0", 00:25:16.026 "path": "/tmp/tmp.XaPwtvrQou" 00:25:16.026 } 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "method": "keyring_file_add_key", 00:25:16.026 "params": { 00:25:16.026 "name": "key1", 00:25:16.026 "path": "/tmp/tmp.YhN8b8E4MZ" 00:25:16.026 } 00:25:16.026 } 00:25:16.026 ] 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "subsystem": "iobuf", 00:25:16.026 "config": [ 00:25:16.026 { 00:25:16.026 "method": "iobuf_set_options", 00:25:16.026 "params": { 00:25:16.026 "small_pool_count": 8192, 00:25:16.026 "large_pool_count": 1024, 00:25:16.026 "small_bufsize": 8192, 00:25:16.026 "large_bufsize": 135168 00:25:16.026 } 00:25:16.026 } 00:25:16.026 ] 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "subsystem": "sock", 00:25:16.026 "config": [ 
00:25:16.026 { 00:25:16.026 "method": "sock_set_default_impl", 00:25:16.026 "params": { 00:25:16.026 "impl_name": "uring" 00:25:16.026 } 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "method": "sock_impl_set_options", 00:25:16.026 "params": { 00:25:16.026 "impl_name": "ssl", 00:25:16.026 "recv_buf_size": 4096, 00:25:16.026 "send_buf_size": 4096, 00:25:16.026 "enable_recv_pipe": true, 00:25:16.026 "enable_quickack": false, 00:25:16.026 "enable_placement_id": 0, 00:25:16.026 "enable_zerocopy_send_server": true, 00:25:16.026 "enable_zerocopy_send_client": false, 00:25:16.026 "zerocopy_threshold": 0, 00:25:16.026 "tls_version": 0, 00:25:16.026 "enable_ktls": false 00:25:16.026 } 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "method": "sock_impl_set_options", 00:25:16.026 "params": { 00:25:16.026 "impl_name": "posix", 00:25:16.026 "recv_buf_size": 2097152, 00:25:16.026 "send_buf_size": 2097152, 00:25:16.026 "enable_recv_pipe": true, 00:25:16.026 "enable_quickack": false, 00:25:16.026 "enable_placement_id": 0, 00:25:16.026 "enable_zerocopy_send_server": true, 00:25:16.026 "enable_zerocopy_send_client": false, 00:25:16.026 "zerocopy_threshold": 0, 00:25:16.026 "tls_version": 0, 00:25:16.026 "enable_ktls": false 00:25:16.026 } 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "method": "sock_impl_set_options", 00:25:16.026 "params": { 00:25:16.026 "impl_name": "uring", 00:25:16.026 "recv_buf_size": 2097152, 00:25:16.026 "send_buf_size": 2097152, 00:25:16.026 "enable_recv_pipe": true, 00:25:16.026 "enable_quickack": false, 00:25:16.026 "enable_placement_id": 0, 00:25:16.026 "enable_zerocopy_send_server": false, 00:25:16.026 "enable_zerocopy_send_client": false, 00:25:16.026 "zerocopy_threshold": 0, 00:25:16.026 "tls_version": 0, 00:25:16.026 "enable_ktls": false 00:25:16.026 } 00:25:16.026 } 00:25:16.026 ] 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "subsystem": "vmd", 00:25:16.026 "config": [] 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "subsystem": "accel", 00:25:16.026 "config": [ 00:25:16.026 { 00:25:16.026 "method": "accel_set_options", 00:25:16.026 "params": { 00:25:16.026 "small_cache_size": 128, 00:25:16.026 "large_cache_size": 16, 00:25:16.026 "task_count": 2048, 00:25:16.026 "sequence_count": 2048, 00:25:16.026 "buf_count": 2048 00:25:16.026 } 00:25:16.026 } 00:25:16.026 ] 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "subsystem": "bdev", 00:25:16.026 "config": [ 00:25:16.026 { 00:25:16.026 "method": "bdev_set_options", 00:25:16.026 "params": { 00:25:16.026 "bdev_io_pool_size": 65535, 00:25:16.026 "bdev_io_cache_size": 256, 00:25:16.026 "bdev_auto_examine": true, 00:25:16.026 "iobuf_small_cache_size": 128, 00:25:16.026 "iobuf_large_cache_size": 16 00:25:16.026 } 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "method": "bdev_raid_set_options", 00:25:16.026 "params": { 00:25:16.026 "process_window_size_kb": 1024, 00:25:16.026 "process_max_bandwidth_mb_sec": 0 00:25:16.026 } 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "method": "bdev_iscsi_set_options", 00:25:16.026 "params": { 00:25:16.026 "timeout_sec": 30 00:25:16.026 } 00:25:16.026 }, 00:25:16.026 { 00:25:16.026 "method": "bdev_nvme_set_options", 00:25:16.026 "params": { 00:25:16.026 "action_on_timeout": "none", 00:25:16.026 "timeout_us": 0, 00:25:16.026 "timeout_admin_us": 0, 00:25:16.026 "keep_alive_timeout_ms": 10000, 00:25:16.026 "arbitration_burst": 0, 00:25:16.026 "low_priority_weight": 0, 00:25:16.026 "medium_priority_weight": 0, 00:25:16.026 "high_priority_weight": 0, 00:25:16.026 "nvme_adminq_poll_period_us": 10000, 00:25:16.026 
"nvme_ioq_poll_period_us": 0, 00:25:16.026 "io_queue_requests": 512, 00:25:16.027 "delay_cmd_submit": true, 00:25:16.027 "transport_retry_count": 4, 00:25:16.027 "bdev_retry_count": 3, 00:25:16.027 "transport_ack_timeout": 0, 00:25:16.027 "ctrlr_loss_timeout_sec": 0, 00:25:16.027 "reconnect_delay_sec": 0, 00:25:16.027 "fast_io_fail_timeout_sec": 0, 00:25:16.027 "disable_auto_failback": false, 00:25:16.027 "generate_uuids": false, 00:25:16.027 "transport_tos": 0, 00:25:16.027 "nvme_error_stat": false, 00:25:16.027 "rdma_srq_size": 0, 00:25:16.027 "io_path_stat": false, 00:25:16.027 "allow_accel_sequence": false, 00:25:16.027 "rdma_max_cq_size": 0, 00:25:16.027 "rdma_cm_event_timeout_ms": 0, 00:25:16.027 "dhchap_digests": [ 00:25:16.027 "sha256", 00:25:16.027 "sha384", 00:25:16.027 "sha512" 00:25:16.027 ], 00:25:16.027 "dhchap_dhgroups": [ 00:25:16.027 "null", 00:25:16.027 "ffdhe2048", 00:25:16.027 "ffdhe3072", 00:25:16.027 "ffdhe4096", 00:25:16.027 "ffdhe6144", 00:25:16.027 "ffdhe8192" 00:25:16.027 ] 00:25:16.027 } 00:25:16.027 }, 00:25:16.027 { 00:25:16.027 "method": "bdev_nvme_attach_controller", 00:25:16.027 "params": { 00:25:16.027 "name": "nvme0", 00:25:16.027 "trtype": "TCP", 00:25:16.027 "adrfam": "IPv4", 00:25:16.027 "traddr": "127.0.0.1", 00:25:16.027 "trsvcid": "4420", 00:25:16.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:16.027 "prchk_reftag": false, 00:25:16.027 "prchk_guard": false, 00:25:16.027 "ctrlr_loss_timeout_sec": 0, 00:25:16.027 "reconnect_delay_sec": 0, 00:25:16.027 "fast_io_fail_timeout_sec": 0, 00:25:16.027 "psk": "key0", 00:25:16.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:16.027 "hdgst": false, 00:25:16.027 "ddgst": false 00:25:16.027 } 00:25:16.027 }, 00:25:16.027 { 00:25:16.027 "method": "bdev_nvme_set_hotplug", 00:25:16.027 "params": { 00:25:16.027 "period_us": 100000, 00:25:16.027 "enable": false 00:25:16.027 } 00:25:16.027 }, 00:25:16.027 { 00:25:16.027 "method": "bdev_wait_for_examine" 00:25:16.027 } 00:25:16.027 ] 00:25:16.027 }, 00:25:16.027 { 00:25:16.027 "subsystem": "nbd", 00:25:16.027 "config": [] 00:25:16.027 } 00:25:16.027 ] 00:25:16.027 }' 00:25:16.027 18:42:33 keyring_file -- keyring/file.sh@115 -- # killprocess 99766 00:25:16.027 18:42:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99766 ']' 00:25:16.027 18:42:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99766 00:25:16.027 18:42:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:16.027 18:42:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.027 18:42:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99766 00:25:16.286 killing process with pid 99766 00:25:16.286 Received shutdown signal, test time was about 1.000000 seconds 00:25:16.286 00:25:16.286 Latency(us) 00:25:16.287 [2024-12-08T18:42:34.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.287 [2024-12-08T18:42:34.217Z] =================================================================================================================== 00:25:16.287 [2024-12-08T18:42:34.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.287 18:42:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:16.287 18:42:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:16.287 18:42:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99766' 00:25:16.287 18:42:33 keyring_file -- common/autotest_common.sh@969 -- # kill 
99766 00:25:16.287 18:42:33 keyring_file -- common/autotest_common.sh@974 -- # wait 99766 00:25:16.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:16.287 18:42:34 keyring_file -- keyring/file.sh@118 -- # bperfpid=100003 00:25:16.287 18:42:34 keyring_file -- keyring/file.sh@120 -- # waitforlisten 100003 /var/tmp/bperf.sock 00:25:16.287 18:42:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100003 ']' 00:25:16.287 18:42:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.287 18:42:34 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:16.287 18:42:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.287 18:42:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.287 18:42:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.287 18:42:34 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:16.287 "subsystems": [ 00:25:16.287 { 00:25:16.287 "subsystem": "keyring", 00:25:16.287 "config": [ 00:25:16.287 { 00:25:16.287 "method": "keyring_file_add_key", 00:25:16.287 "params": { 00:25:16.287 "name": "key0", 00:25:16.287 "path": "/tmp/tmp.XaPwtvrQou" 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "keyring_file_add_key", 00:25:16.287 "params": { 00:25:16.287 "name": "key1", 00:25:16.287 "path": "/tmp/tmp.YhN8b8E4MZ" 00:25:16.287 } 00:25:16.287 } 00:25:16.287 ] 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "subsystem": "iobuf", 00:25:16.287 "config": [ 00:25:16.287 { 00:25:16.287 "method": "iobuf_set_options", 00:25:16.287 "params": { 00:25:16.287 "small_pool_count": 8192, 00:25:16.287 "large_pool_count": 1024, 00:25:16.287 "small_bufsize": 8192, 00:25:16.287 "large_bufsize": 135168 00:25:16.287 } 00:25:16.287 } 00:25:16.287 ] 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "subsystem": "sock", 00:25:16.287 "config": [ 00:25:16.287 { 00:25:16.287 "method": "sock_set_default_impl", 00:25:16.287 "params": { 00:25:16.287 "impl_name": "uring" 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "sock_impl_set_options", 00:25:16.287 "params": { 00:25:16.287 "impl_name": "ssl", 00:25:16.287 "recv_buf_size": 4096, 00:25:16.287 "send_buf_size": 4096, 00:25:16.287 "enable_recv_pipe": true, 00:25:16.287 "enable_quickack": false, 00:25:16.287 "enable_placement_id": 0, 00:25:16.287 "enable_zerocopy_send_server": true, 00:25:16.287 "enable_zerocopy_send_client": false, 00:25:16.287 "zerocopy_threshold": 0, 00:25:16.287 "tls_version": 0, 00:25:16.287 "enable_ktls": false 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "sock_impl_set_options", 00:25:16.287 "params": { 00:25:16.287 "impl_name": "posix", 00:25:16.287 "recv_buf_size": 2097152, 00:25:16.287 "send_buf_size": 2097152, 00:25:16.287 "enable_recv_pipe": true, 00:25:16.287 "enable_quickack": false, 00:25:16.287 "enable_placement_id": 0, 00:25:16.287 "enable_zerocopy_send_server": true, 00:25:16.287 "enable_zerocopy_send_client": false, 00:25:16.287 "zerocopy_threshold": 0, 00:25:16.287 "tls_version": 0, 00:25:16.287 "enable_ktls": false 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "sock_impl_set_options", 00:25:16.287 "params": { 00:25:16.287 "impl_name": "uring", 00:25:16.287 "recv_buf_size": 
2097152, 00:25:16.287 "send_buf_size": 2097152, 00:25:16.287 "enable_recv_pipe": true, 00:25:16.287 "enable_quickack": false, 00:25:16.287 "enable_placement_id": 0, 00:25:16.287 "enable_zerocopy_send_server": false, 00:25:16.287 "enable_zerocopy_send_client": false, 00:25:16.287 "zerocopy_threshold": 0, 00:25:16.287 "tls_version": 0, 00:25:16.287 "enable_ktls": false 00:25:16.287 } 00:25:16.287 } 00:25:16.287 ] 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "subsystem": "vmd", 00:25:16.287 "config": [] 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "subsystem": "accel", 00:25:16.287 "config": [ 00:25:16.287 { 00:25:16.287 "method": "accel_set_options", 00:25:16.287 "params": { 00:25:16.287 "small_cache_size": 128, 00:25:16.287 "large_cache_size": 16, 00:25:16.287 "task_count": 2048, 00:25:16.287 "sequence_count": 2048, 00:25:16.287 "buf_count": 2048 00:25:16.287 } 00:25:16.287 } 00:25:16.287 ] 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "subsystem": "bdev", 00:25:16.287 "config": [ 00:25:16.287 { 00:25:16.287 "method": "bdev_set_options", 00:25:16.287 "params": { 00:25:16.287 "bdev_io_pool_size": 65535, 00:25:16.287 "bdev_io_cache_size": 256, 00:25:16.287 "bdev_auto_examine": true, 00:25:16.287 "iobuf_small_cache_size": 128, 00:25:16.287 "iobuf_large_cache_size": 16 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "bdev_raid_set_options", 00:25:16.287 "params": { 00:25:16.287 "process_window_size_kb": 1024, 00:25:16.287 "process_max_bandwidth_mb_sec": 0 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "bdev_iscsi_set_options", 00:25:16.287 "params": { 00:25:16.287 "timeout_sec": 30 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "bdev_nvme_set_options", 00:25:16.287 "params": { 00:25:16.287 "action_on_timeout": "none", 00:25:16.287 "timeout_us": 0, 00:25:16.287 "timeout_admin_us": 0, 00:25:16.287 "keep_alive_timeout_ms": 10000, 00:25:16.287 "arbitration_burst": 0, 00:25:16.287 "low_priority_weight": 0, 00:25:16.287 "medium_priority_weight": 0, 00:25:16.287 "high_priority_weight": 0, 00:25:16.287 "nvme_adminq_poll_period_us": 10000, 00:25:16.287 "nvme_ioq_poll_period_us": 0, 00:25:16.287 "io_queue_requests": 512, 00:25:16.287 "delay_cmd_submit": true, 00:25:16.287 "transport_retry_count": 4, 00:25:16.287 "bdev_retry_count": 3, 00:25:16.287 "transport_ack_timeout": 0, 00:25:16.287 "ctrlr_loss_timeout_sec": 0, 00:25:16.287 "reconnect_delay_sec": 0, 00:25:16.287 "fast_io_fail_timeout_sec": 0, 00:25:16.287 "disable_auto_failback": false, 00:25:16.287 "generate_uuids": false, 00:25:16.287 "transport_tos": 0, 00:25:16.287 "nvme_error_stat": false, 00:25:16.287 "rdma_srq_size": 0, 00:25:16.287 "io_path_stat": false, 00:25:16.287 "allow_accel_sequence": false, 00:25:16.287 "rdma_max_cq_size": 0, 00:25:16.287 "rdma_cm_event_timeout_ms": 0, 00:25:16.287 "dhchap_digests": [ 00:25:16.287 "sha256", 00:25:16.287 "sha384", 00:25:16.287 "sha512" 00:25:16.287 ], 00:25:16.287 "dhchap_dhgroups": [ 00:25:16.287 "null", 00:25:16.287 "ffdhe2048", 00:25:16.287 "ffdhe3072", 00:25:16.287 "ffdhe4096", 00:25:16.287 "ffdhe6144", 00:25:16.287 "ffdhe8192" 00:25:16.287 ] 00:25:16.287 } 00:25:16.287 }, 00:25:16.287 { 00:25:16.287 "method": "bdev_nvme_attach_controller", 00:25:16.287 "params": { 00:25:16.287 "name": "nvme0", 00:25:16.287 "trtype": "TCP", 00:25:16.287 "adrfam": "IPv4", 00:25:16.287 "traddr": "127.0.0.1", 00:25:16.287 "trsvcid": "4420", 00:25:16.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:16.287 "prchk_reftag": false, 00:25:16.287 "prchk_guard": 
false, 00:25:16.287 "ctrlr_loss_timeout_sec": 0, 00:25:16.287 "reconnect_delay_sec": 0, 00:25:16.287 "fast_io_fail_timeout_sec": 0, 00:25:16.287 "psk": "key0", 00:25:16.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:16.287 "hdgst": false, 00:25:16.287 "ddgst": false 00:25:16.288 } 00:25:16.288 }, 00:25:16.288 { 00:25:16.288 "method": "bdev_nvme_set_hotplug", 00:25:16.288 "params": { 00:25:16.288 "period_us": 100000, 00:25:16.288 "enable": false 00:25:16.288 } 00:25:16.288 }, 00:25:16.288 { 00:25:16.288 "method": "bdev_wait_for_examine" 00:25:16.288 } 00:25:16.288 ] 00:25:16.288 }, 00:25:16.288 { 00:25:16.288 "subsystem": "nbd", 00:25:16.288 "config": [] 00:25:16.288 } 00:25:16.288 ] 00:25:16.288 }' 00:25:16.288 18:42:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:16.547 [2024-12-08 18:42:34.216518] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:16.547 [2024-12-08 18:42:34.216627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100003 ] 00:25:16.547 [2024-12-08 18:42:34.348377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.547 [2024-12-08 18:42:34.404559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.806 [2024-12-08 18:42:34.536159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:16.806 [2024-12-08 18:42:34.586464] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:17.380 18:42:35 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.380 18:42:35 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:17.380 18:42:35 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:17.380 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:17.380 18:42:35 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:17.640 18:42:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:17.640 18:42:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:17.640 18:42:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:17.640 18:42:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:17.640 18:42:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:17.640 18:42:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:17.640 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:17.899 18:42:35 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:17.899 18:42:35 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:17.899 18:42:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:17.899 18:42:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:17.899 18:42:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:17.899 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:17.899 18:42:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:18.158 18:42:35 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 
00:25:18.158 18:42:35 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:18.158 18:42:35 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:18.158 18:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:18.417 18:42:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:18.417 18:42:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:18.417 18:42:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XaPwtvrQou /tmp/tmp.YhN8b8E4MZ 00:25:18.417 18:42:36 keyring_file -- keyring/file.sh@20 -- # killprocess 100003 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100003 ']' 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100003 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100003 00:25:18.417 killing process with pid 100003 00:25:18.417 Received shutdown signal, test time was about 1.000000 seconds 00:25:18.417 00:25:18.417 Latency(us) 00:25:18.417 [2024-12-08T18:42:36.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.417 [2024-12-08T18:42:36.347Z] =================================================================================================================== 00:25:18.417 [2024-12-08T18:42:36.347Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100003' 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@969 -- # kill 100003 00:25:18.417 18:42:36 keyring_file -- common/autotest_common.sh@974 -- # wait 100003 00:25:18.676 18:42:36 keyring_file -- keyring/file.sh@21 -- # killprocess 99755 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99755 ']' 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99755 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99755 00:25:18.676 killing process with pid 99755 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99755' 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@969 -- # kill 99755 00:25:18.676 18:42:36 keyring_file -- common/autotest_common.sh@974 -- # wait 99755 00:25:19.244 00:25:19.244 real 0m14.890s 00:25:19.244 user 0m37.309s 00:25:19.244 sys 0m3.011s 00:25:19.244 18:42:36 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.244 ************************************ 00:25:19.244 END TEST keyring_file 00:25:19.244 ************************************ 00:25:19.244 18:42:36 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.244 18:42:37 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:25:19.244 18:42:37 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:19.244 18:42:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:19.244 18:42:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.244 18:42:37 -- common/autotest_common.sh@10 -- # set +x 00:25:19.244 ************************************ 00:25:19.244 START TEST keyring_linux 00:25:19.244 ************************************ 00:25:19.244 18:42:37 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:19.244 Joined session keyring: 642965775 00:25:19.244 * Looking for test storage... 00:25:19.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:19.244 18:42:37 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:19.244 18:42:37 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:25:19.244 18:42:37 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:19.503 18:42:37 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.503 18:42:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:19.503 18:42:37 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.503 18:42:37 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:19.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.503 --rc genhtml_branch_coverage=1 00:25:19.503 --rc genhtml_function_coverage=1 00:25:19.503 --rc genhtml_legend=1 00:25:19.503 --rc geninfo_all_blocks=1 00:25:19.503 --rc geninfo_unexecuted_blocks=1 00:25:19.503 00:25:19.503 ' 00:25:19.503 18:42:37 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:19.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.503 --rc genhtml_branch_coverage=1 00:25:19.503 --rc genhtml_function_coverage=1 00:25:19.503 --rc genhtml_legend=1 00:25:19.503 --rc geninfo_all_blocks=1 00:25:19.503 --rc geninfo_unexecuted_blocks=1 00:25:19.503 00:25:19.503 ' 00:25:19.503 18:42:37 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:19.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.503 --rc genhtml_branch_coverage=1 00:25:19.503 --rc genhtml_function_coverage=1 00:25:19.503 --rc genhtml_legend=1 00:25:19.503 --rc geninfo_all_blocks=1 00:25:19.503 --rc geninfo_unexecuted_blocks=1 00:25:19.503 00:25:19.503 ' 00:25:19.503 18:42:37 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:19.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.503 --rc genhtml_branch_coverage=1 00:25:19.503 --rc genhtml_function_coverage=1 00:25:19.503 --rc genhtml_legend=1 00:25:19.503 --rc geninfo_all_blocks=1 00:25:19.503 --rc geninfo_unexecuted_blocks=1 00:25:19.503 00:25:19.503 ' 00:25:19.503 18:42:37 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:19.503 18:42:37 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.503 18:42:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:19.503 18:42:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.503 18:42:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.503 18:42:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.503 18:42:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.504 18:42:37 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=f0ffb32d-08a4-43dc-a67b-9b60cdc76f8c 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.504 18:42:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.504 18:42:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.504 18:42:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.504 18:42:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.504 18:42:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.504 18:42:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.504 18:42:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.504 18:42:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:19.504 18:42:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@51 -- # : 0 
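Shortly after this, keyring/common.sh's prep_key builds the two PSK files used by the linux-keyring test: each hex key is wrapped into the NVMe TLS PSK interchange format ("NVMeTLSkey-1:00:<base64 payload>:") by format_interchange_psk, which shells out to an inline python helper in nvmf/common.sh, and the result is written to /tmp/:spdk-test:keyN with mode 0600. A hedged standalone sketch of the same preparation for key0, reusing the interchange string the helper emits later in this log rather than re-deriving its checksum trailer:

    # Rough equivalent of: prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
    path=/tmp/:spdk-test:key0
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # output of format_interchange_psk

    printf '%s' "$psk" > "$path"
    chmod 0600 "$path"            # restrict permissions, as keyring/common.sh does
    echo "$path"
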
00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.504 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:19.504 /tmp/:spdk-test:key0 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:19.504 18:42:37 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:19.504 /tmp/:spdk-test:key1 00:25:19.504 18:42:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100135 00:25:19.504 18:42:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100135 00:25:19.504 18:42:37 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100135 ']' 00:25:19.504 18:42:37 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.504 18:42:37 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:19.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.504 18:42:37 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.504 18:42:37 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:19.504 18:42:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 [2024-12-08 18:42:37.386342] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:19.504 [2024-12-08 18:42:37.386437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100135 ] 00:25:19.824 [2024-12-08 18:42:37.515825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.824 [2024-12-08 18:42:37.599620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.824 [2024-12-08 18:42:37.684474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:20.082 18:42:37 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:20.082 [2024-12-08 18:42:37.916538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.082 null0 00:25:20.082 [2024-12-08 18:42:37.948532] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:20.082 [2024-12-08 18:42:37.948721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.082 18:42:37 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:20.082 154818807 00:25:20.082 18:42:37 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:20.082 870843321 00:25:20.082 18:42:37 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:20.082 18:42:37 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100141 00:25:20.082 18:42:37 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100141 /var/tmp/bperf.sock 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100141 ']' 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:20.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.082 18:42:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:20.341 [2024-12-08 18:42:38.017590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:20.341 [2024-12-08 18:42:38.017676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100141 ] 00:25:20.341 [2024-12-08 18:42:38.149739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.341 [2024-12-08 18:42:38.208093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.600 18:42:38 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.600 18:42:38 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:20.600 18:42:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:20.600 18:42:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:20.600 18:42:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:20.600 18:42:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:21.168 [2024-12-08 18:42:38.854791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:21.168 18:42:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:21.168 18:42:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:21.426 [2024-12-08 18:42:39.156226] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:21.426 nvme0n1 00:25:21.426 18:42:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:21.426 18:42:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:21.426 18:42:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:21.426 18:42:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:21.426 18:42:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:21.426 18:42:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:21.685 18:42:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:21.685 18:42:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:21.685 18:42:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:21.685 18:42:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:21.685 18:42:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:21.685 18:42:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:21.685 18:42:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:21.945 18:42:39 keyring_linux -- keyring/linux.sh@25 -- # sn=154818807 00:25:21.945 18:42:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:21.945 18:42:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
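The keyring_linux variant keeps the PSKs in the kernel session keyring rather than in files: the interchange strings are loaded with keyctl, bdevperf is started with --wait-for-rpc so the linux keyring plugin can be enabled before framework initialization, and the controller is attached by key name. A condensed sketch of that flow as traced above — the rpc=/sock= shorthand is mine; the key names, RPC methods and flags are the ones in this log, and the serial numbers are whatever keyctl returns on a given run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Load both PSK interchange strings into the current session keyring;
    # keyctl prints the serial number of each new key (154818807 / 870843321 in this run).
    keyctl add user ":spdk-test:key0" "$(cat /tmp/:spdk-test:key0)" @s
    keyctl add user ":spdk-test:key1" "$(cat /tmp/:spdk-test:key1)" @s

    # bdevperf was launched with --wait-for-rpc: enable the linux keyring plugin,
    # then let subsystem initialization proceed.
    "$rpc" -s "$sock" keyring_linux_set_options --enable
    "$rpc" -s "$sock" framework_start_init

    # Attach using the kernel-keyring key name instead of a file path.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    # The plugin reports the key it is using; its serial number should match the keyring entry.
    "$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'
    keyctl search @s user ":spdk-test:key0"

Compared with the keyring_file run above, the only change visible at the RPC level is the --psk argument: a kernel keyring name instead of a registered file-backed key.
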
00:25:21.945 18:42:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 154818807 == \1\5\4\8\1\8\8\0\7 ]] 00:25:21.945 18:42:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 154818807 00:25:21.945 18:42:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:21.945 18:42:39 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.945 Running I/O for 1 seconds... 00:25:23.323 14485.00 IOPS, 56.58 MiB/s 00:25:23.323 Latency(us) 00:25:23.323 [2024-12-08T18:42:41.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.323 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:23.323 nvme0n1 : 1.01 14498.43 56.63 0.00 0.00 8788.44 6613.18 16205.27 00:25:23.323 [2024-12-08T18:42:41.253Z] =================================================================================================================== 00:25:23.323 [2024-12-08T18:42:41.254Z] Total : 14498.43 56.63 0.00 0.00 8788.44 6613.18 16205.27 00:25:23.324 { 00:25:23.324 "results": [ 00:25:23.324 { 00:25:23.324 "job": "nvme0n1", 00:25:23.324 "core_mask": "0x2", 00:25:23.324 "workload": "randread", 00:25:23.324 "status": "finished", 00:25:23.324 "queue_depth": 128, 00:25:23.324 "io_size": 4096, 00:25:23.324 "runtime": 1.007971, 00:25:23.324 "iops": 14498.432990631674, 00:25:23.324 "mibps": 56.63450386965498, 00:25:23.324 "io_failed": 0, 00:25:23.324 "io_timeout": 0, 00:25:23.324 "avg_latency_us": 8788.442257859835, 00:25:23.324 "min_latency_us": 6613.178181818182, 00:25:23.324 "max_latency_us": 16205.265454545455 00:25:23.324 } 00:25:23.324 ], 00:25:23.324 "core_count": 1 00:25:23.324 } 00:25:23.324 18:42:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:23.324 18:42:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:23.324 18:42:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:23.324 18:42:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:23.324 18:42:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:23.324 18:42:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:23.324 18:42:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:23.324 18:42:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:23.582 18:42:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:23.582 18:42:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:23.582 18:42:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:23.582 18:42:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:23.582 18:42:41 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:25:23.582 18:42:41 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:23.582 
18:42:41 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:23.582 18:42:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.582 18:42:41 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:23.582 18:42:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.582 18:42:41 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:23.582 18:42:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:23.841 [2024-12-08 18:42:41.641789] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:23.841 [2024-12-08 18:42:41.642133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d5f30 (107): Transport endpoint is not connected 00:25:23.841 [2024-12-08 18:42:41.643107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d5f30 (9): Bad file descriptor 00:25:23.841 [2024-12-08 18:42:41.644119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.841 [2024-12-08 18:42:41.644139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:23.841 [2024-12-08 18:42:41.644164] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:23.841 [2024-12-08 18:42:41.644184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:23.841 request: 00:25:23.841 { 00:25:23.841 "name": "nvme0", 00:25:23.841 "trtype": "tcp", 00:25:23.841 "traddr": "127.0.0.1", 00:25:23.841 "adrfam": "ipv4", 00:25:23.841 "trsvcid": "4420", 00:25:23.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:23.841 "prchk_reftag": false, 00:25:23.841 "prchk_guard": false, 00:25:23.841 "hdgst": false, 00:25:23.841 "ddgst": false, 00:25:23.841 "psk": ":spdk-test:key1", 00:25:23.841 "allow_unrecognized_csi": false, 00:25:23.841 "method": "bdev_nvme_attach_controller", 00:25:23.841 "req_id": 1 00:25:23.841 } 00:25:23.841 Got JSON-RPC error response 00:25:23.841 response: 00:25:23.841 { 00:25:23.841 "code": -5, 00:25:23.841 "message": "Input/output error" 00:25:23.841 } 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@33 -- # sn=154818807 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 154818807 00:25:23.841 1 links removed 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@33 -- # sn=870843321 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 870843321 00:25:23.841 1 links removed 00:25:23.841 18:42:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100141 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100141 ']' 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100141 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100141 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:23.841 killing process with pid 100141 00:25:23.841 Received shutdown signal, test time was about 1.000000 seconds 00:25:23.841 00:25:23.841 Latency(us) 00:25:23.841 [2024-12-08T18:42:41.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.841 [2024-12-08T18:42:41.771Z] 
=================================================================================================================== 00:25:23.841 [2024-12-08T18:42:41.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100141' 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@969 -- # kill 100141 00:25:23.841 18:42:41 keyring_linux -- common/autotest_common.sh@974 -- # wait 100141 00:25:24.101 18:42:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100135 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100135 ']' 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100135 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100135 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:24.101 killing process with pid 100135 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100135' 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@969 -- # kill 100135 00:25:24.101 18:42:41 keyring_linux -- common/autotest_common.sh@974 -- # wait 100135 00:25:24.670 00:25:24.670 real 0m5.418s 00:25:24.670 user 0m10.129s 00:25:24.670 sys 0m1.697s 00:25:24.670 18:42:42 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:24.670 18:42:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:24.670 ************************************ 00:25:24.670 END TEST keyring_linux 00:25:24.670 ************************************ 00:25:24.670 18:42:42 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:24.670 18:42:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:24.670 18:42:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:24.670 18:42:42 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:24.670 18:42:42 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:24.670 18:42:42 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:24.670 18:42:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:24.671 18:42:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:24.671 18:42:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:24.671 18:42:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:24.671 18:42:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:24.671 18:42:42 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:24.671 18:42:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:24.671 18:42:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:24.671 18:42:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:24.671 18:42:42 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:24.671 18:42:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:24.671 18:42:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.671 18:42:42 -- common/autotest_common.sh@10 -- # set +x 00:25:24.671 18:42:42 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:24.671 18:42:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:24.671 18:42:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:24.671 18:42:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.578 
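Just before the shutdown messages above, cleanup resolves each test key's serial number and drops its link from the session keyring, which is why "1 links removed" appears twice in the log. A minimal sketch of that teardown, using the same key names:

    # Remove the test PSKs from the kernel session keyring by serial number.
    for name in ":spdk-test:key0" ":spdk-test:key1"; do
        sn=$(keyctl search @s user "$name")
        keyctl unlink "$sn"      # prints "1 links removed" on success
    done
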
INFO: APP EXITING 00:25:26.578 INFO: killing all VMs 00:25:26.578 INFO: killing vhost app 00:25:26.578 INFO: EXIT DONE 00:25:27.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:27.146 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:27.146 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:28.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:28.081 Cleaning 00:25:28.081 Removing: /var/run/dpdk/spdk0/config 00:25:28.081 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:28.081 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:28.081 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:28.082 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:28.082 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:28.082 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:28.082 Removing: /var/run/dpdk/spdk1/config 00:25:28.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:28.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:28.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:28.082 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:28.082 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:28.082 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:28.082 Removing: /var/run/dpdk/spdk2/config 00:25:28.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:28.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:28.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:28.082 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:28.082 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:28.082 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:28.082 Removing: /var/run/dpdk/spdk3/config 00:25:28.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:28.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:28.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:28.082 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:28.082 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:28.082 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:28.082 Removing: /var/run/dpdk/spdk4/config 00:25:28.082 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:28.082 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:28.082 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:28.082 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:28.082 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:28.082 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:28.082 Removing: /dev/shm/nvmf_trace.0 00:25:28.082 Removing: /dev/shm/spdk_tgt_trace.pid68822 00:25:28.082 Removing: /var/run/dpdk/spdk0 00:25:28.082 Removing: /var/run/dpdk/spdk1 00:25:28.082 Removing: /var/run/dpdk/spdk2 00:25:28.082 Removing: /var/run/dpdk/spdk3 00:25:28.082 Removing: /var/run/dpdk/spdk4 00:25:28.082 Removing: /var/run/dpdk/spdk_pid100003 00:25:28.082 Removing: /var/run/dpdk/spdk_pid100135 00:25:28.082 Removing: /var/run/dpdk/spdk_pid100141 00:25:28.082 Removing: /var/run/dpdk/spdk_pid68669 00:25:28.082 Removing: /var/run/dpdk/spdk_pid68822 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69020 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69101 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69127 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69236 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69247 
00:25:28.082 Removing: /var/run/dpdk/spdk_pid69381 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69576 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69725 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69803 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69879 00:25:28.082 Removing: /var/run/dpdk/spdk_pid69971 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70055 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70089 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70119 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70194 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70299 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70740 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70784 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70837 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70853 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70920 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70929 00:25:28.082 Removing: /var/run/dpdk/spdk_pid70996 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71004 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71050 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71060 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71100 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71120 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71256 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71286 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71369 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71701 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71713 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71744 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71763 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71778 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71798 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71812 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71828 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71852 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71864 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71881 00:25:28.341 Removing: /var/run/dpdk/spdk_pid71900 00:25:28.342 Removing: /var/run/dpdk/spdk_pid71913 00:25:28.342 Removing: /var/run/dpdk/spdk_pid71929 00:25:28.342 Removing: /var/run/dpdk/spdk_pid71948 00:25:28.342 Removing: /var/run/dpdk/spdk_pid71967 00:25:28.342 Removing: /var/run/dpdk/spdk_pid71981 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72001 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72015 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72036 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72061 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72080 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72109 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72176 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72210 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72216 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72250 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72254 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72267 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72304 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72323 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72346 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72361 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72365 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72380 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72384 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72399 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72403 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72418 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72442 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72473 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72483 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72511 00:25:28.342 Removing: 
/var/run/dpdk/spdk_pid72521 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72528 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72569 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72580 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72612 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72614 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72627 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72629 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72642 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72644 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72657 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72659 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72741 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72783 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72896 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72929 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72974 00:25:28.342 Removing: /var/run/dpdk/spdk_pid72994 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73011 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73031 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73062 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73078 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73156 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73183 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73227 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73291 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73336 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73370 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73475 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73512 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73550 00:25:28.342 Removing: /var/run/dpdk/spdk_pid73782 00:25:28.601 Removing: /var/run/dpdk/spdk_pid73874 00:25:28.601 Removing: /var/run/dpdk/spdk_pid73903 00:25:28.601 Removing: /var/run/dpdk/spdk_pid73932 00:25:28.601 Removing: /var/run/dpdk/spdk_pid73971 00:25:28.601 Removing: /var/run/dpdk/spdk_pid74005 00:25:28.601 Removing: /var/run/dpdk/spdk_pid74038 00:25:28.601 Removing: /var/run/dpdk/spdk_pid74075 00:25:28.601 Removing: /var/run/dpdk/spdk_pid74468 00:25:28.601 Removing: /var/run/dpdk/spdk_pid74506 00:25:28.601 Removing: /var/run/dpdk/spdk_pid74844 00:25:28.601 Removing: /var/run/dpdk/spdk_pid75304 00:25:28.601 Removing: /var/run/dpdk/spdk_pid75574 00:25:28.601 Removing: /var/run/dpdk/spdk_pid76435 00:25:28.601 Removing: /var/run/dpdk/spdk_pid77346 00:25:28.601 Removing: /var/run/dpdk/spdk_pid77463 00:25:28.601 Removing: /var/run/dpdk/spdk_pid77530 00:25:28.601 Removing: /var/run/dpdk/spdk_pid78941 00:25:28.601 Removing: /var/run/dpdk/spdk_pid79270 00:25:28.602 Removing: /var/run/dpdk/spdk_pid82897 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83265 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83377 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83513 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83534 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83568 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83589 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83687 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83828 00:25:28.602 Removing: /var/run/dpdk/spdk_pid83993 00:25:28.602 Removing: /var/run/dpdk/spdk_pid84067 00:25:28.602 Removing: /var/run/dpdk/spdk_pid84267 00:25:28.602 Removing: /var/run/dpdk/spdk_pid84338 00:25:28.602 Removing: /var/run/dpdk/spdk_pid84423 00:25:28.602 Removing: /var/run/dpdk/spdk_pid84782 00:25:28.602 Removing: /var/run/dpdk/spdk_pid85195 00:25:28.602 Removing: /var/run/dpdk/spdk_pid85196 00:25:28.602 Removing: /var/run/dpdk/spdk_pid85197 00:25:28.602 Removing: /var/run/dpdk/spdk_pid85465 00:25:28.602 Removing: /var/run/dpdk/spdk_pid85710 
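Annotation: the "Cleaning" / "Removing" entries above and just below record the post-test removal of per-instance DPDK runtime state (/var/run/dpdk/spdk0 through spdk4, each holding config, fbarray_memseg, fbarray_memzone and hugepage_info files), the spdk_pid lock files, and the SPDK trace buffers in /dev/shm. A minimal, hedged sketch of an equivalent manual cleanup follows; the paths are copied from the log, but the loop itself is illustrative and not the harness's exact code.

# Remove leftover DPDK runtime state after all SPDK target processes exit.
for rundir in /var/run/dpdk/spdk[0-9]*/; do
    [ -d "$rundir" ] || continue
    rm -f "$rundir"config "$rundir"fbarray_mem* "$rundir"hugepage_info
    rmdir "$rundir" 2>/dev/null || true      # drop the directory only if now empty
done
rm -f /var/run/dpdk/spdk_pid*                                 # stale per-PID lock files
rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*      # shared-memory trace buffers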
00:25:28.602 Removing: /var/run/dpdk/spdk_pid85712 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88084 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88086 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88412 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88437 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88451 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88477 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88488 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88571 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88573 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88681 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88689 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88797 00:25:28.602 Removing: /var/run/dpdk/spdk_pid88799 00:25:28.602 Removing: /var/run/dpdk/spdk_pid89250 00:25:28.602 Removing: /var/run/dpdk/spdk_pid89299 00:25:28.602 Removing: /var/run/dpdk/spdk_pid89402 00:25:28.602 Removing: /var/run/dpdk/spdk_pid89476 00:25:28.602 Removing: /var/run/dpdk/spdk_pid89834 00:25:28.602 Removing: /var/run/dpdk/spdk_pid90036 00:25:28.602 Removing: /var/run/dpdk/spdk_pid90468 00:25:28.602 Removing: /var/run/dpdk/spdk_pid91012 00:25:28.602 Removing: /var/run/dpdk/spdk_pid91864 00:25:28.602 Removing: /var/run/dpdk/spdk_pid92488 00:25:28.602 Removing: /var/run/dpdk/spdk_pid92496 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94517 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94571 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94627 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94681 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94802 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94849 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94906 00:25:28.602 Removing: /var/run/dpdk/spdk_pid94953 00:25:28.602 Removing: /var/run/dpdk/spdk_pid95326 00:25:28.602 Removing: /var/run/dpdk/spdk_pid96549 00:25:28.602 Removing: /var/run/dpdk/spdk_pid96692 00:25:28.602 Removing: /var/run/dpdk/spdk_pid96927 00:25:28.602 Removing: /var/run/dpdk/spdk_pid97515 00:25:28.602 Removing: /var/run/dpdk/spdk_pid97675 00:25:28.602 Removing: /var/run/dpdk/spdk_pid97832 00:25:28.602 Removing: /var/run/dpdk/spdk_pid97929 00:25:28.602 Removing: /var/run/dpdk/spdk_pid98093 00:25:28.879 Removing: /var/run/dpdk/spdk_pid98205 00:25:28.879 Removing: /var/run/dpdk/spdk_pid98902 00:25:28.879 Removing: /var/run/dpdk/spdk_pid98936 00:25:28.879 Removing: /var/run/dpdk/spdk_pid98967 00:25:28.879 Removing: /var/run/dpdk/spdk_pid99221 00:25:28.879 Removing: /var/run/dpdk/spdk_pid99256 00:25:28.879 Removing: /var/run/dpdk/spdk_pid99287 00:25:28.879 Removing: /var/run/dpdk/spdk_pid99755 00:25:28.879 Removing: /var/run/dpdk/spdk_pid99766 00:25:28.879 Clean 00:25:28.879 18:42:46 -- common/autotest_common.sh@1451 -- # return 0 00:25:28.879 18:42:46 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:28.879 18:42:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:28.879 18:42:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.879 18:42:46 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:28.879 18:42:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:28.879 18:42:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.879 18:42:46 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:28.879 18:42:46 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:28.879 18:42:46 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:28.879 18:42:46 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:28.879 18:42:46 -- spdk/autotest.sh@394 
-- # hostname 00:25:28.879 18:42:46 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:29.175 geninfo: WARNING: invalid characters removed from testname! 00:25:51.112 18:43:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.396 18:43:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:56.932 18:43:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:58.835 18:43:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:01.371 18:43:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:03.908 18:43:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:05.812 18:43:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:05.812 18:43:23 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:26:05.812 18:43:23 -- common/autotest_common.sh@1681 -- $ lcov --version 00:26:05.812 18:43:23 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:26:05.812 18:43:23 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:26:05.812 18:43:23 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:26:05.812 18:43:23 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:26:05.812 18:43:23 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:26:05.812 18:43:23 
-- scripts/common.sh@336 -- $ IFS=.-: 00:26:05.812 18:43:23 -- scripts/common.sh@336 -- $ read -ra ver1 00:26:05.812 18:43:23 -- scripts/common.sh@337 -- $ IFS=.-: 00:26:05.812 18:43:23 -- scripts/common.sh@337 -- $ read -ra ver2 00:26:05.812 18:43:23 -- scripts/common.sh@338 -- $ local 'op=<' 00:26:05.812 18:43:23 -- scripts/common.sh@340 -- $ ver1_l=2 00:26:05.812 18:43:23 -- scripts/common.sh@341 -- $ ver2_l=1 00:26:05.812 18:43:23 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:26:05.812 18:43:23 -- scripts/common.sh@344 -- $ case "$op" in 00:26:05.812 18:43:23 -- scripts/common.sh@345 -- $ : 1 00:26:05.812 18:43:23 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:26:05.812 18:43:23 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:05.812 18:43:23 -- scripts/common.sh@365 -- $ decimal 1 00:26:05.812 18:43:23 -- scripts/common.sh@353 -- $ local d=1 00:26:05.812 18:43:23 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:26:05.812 18:43:23 -- scripts/common.sh@355 -- $ echo 1 00:26:05.812 18:43:23 -- scripts/common.sh@365 -- $ ver1[v]=1 00:26:05.812 18:43:23 -- scripts/common.sh@366 -- $ decimal 2 00:26:05.812 18:43:23 -- scripts/common.sh@353 -- $ local d=2 00:26:05.812 18:43:23 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:26:05.812 18:43:23 -- scripts/common.sh@355 -- $ echo 2 00:26:05.812 18:43:23 -- scripts/common.sh@366 -- $ ver2[v]=2 00:26:05.812 18:43:23 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:26:05.812 18:43:23 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:26:05.812 18:43:23 -- scripts/common.sh@368 -- $ return 0 00:26:05.812 18:43:23 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.812 18:43:23 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:26:05.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.812 --rc genhtml_branch_coverage=1 00:26:05.812 --rc genhtml_function_coverage=1 00:26:05.812 --rc genhtml_legend=1 00:26:05.812 --rc geninfo_all_blocks=1 00:26:05.812 --rc geninfo_unexecuted_blocks=1 00:26:05.812 00:26:05.812 ' 00:26:05.812 18:43:23 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:26:05.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.812 --rc genhtml_branch_coverage=1 00:26:05.812 --rc genhtml_function_coverage=1 00:26:05.812 --rc genhtml_legend=1 00:26:05.812 --rc geninfo_all_blocks=1 00:26:05.812 --rc geninfo_unexecuted_blocks=1 00:26:05.812 00:26:05.812 ' 00:26:05.812 18:43:23 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:26:05.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.813 --rc genhtml_branch_coverage=1 00:26:05.813 --rc genhtml_function_coverage=1 00:26:05.813 --rc genhtml_legend=1 00:26:05.813 --rc geninfo_all_blocks=1 00:26:05.813 --rc geninfo_unexecuted_blocks=1 00:26:05.813 00:26:05.813 ' 00:26:05.813 18:43:23 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:26:05.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.813 --rc genhtml_branch_coverage=1 00:26:05.813 --rc genhtml_function_coverage=1 00:26:05.813 --rc genhtml_legend=1 00:26:05.813 --rc geninfo_all_blocks=1 00:26:05.813 --rc geninfo_unexecuted_blocks=1 00:26:05.813 00:26:05.813 ' 00:26:05.813 18:43:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.813 18:43:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:26:05.813 18:43:23 -- scripts/common.sh@544 -- $ [[ -e 
/bin/wpdk_common.sh ]] 00:26:05.813 18:43:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.813 18:43:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.813 18:43:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.813 18:43:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.813 18:43:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.813 18:43:23 -- paths/export.sh@5 -- $ export PATH 00:26:05.813 18:43:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.813 18:43:23 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:05.813 18:43:23 -- common/autobuild_common.sh@479 -- $ date +%s 00:26:05.813 18:43:23 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733683403.XXXXXX 00:26:05.813 18:43:23 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733683403.vxOayP 00:26:05.813 18:43:23 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:26:05.813 18:43:23 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:26:05.813 18:43:23 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:26:05.813 18:43:23 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:26:05.813 18:43:23 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:05.813 18:43:23 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:05.813 18:43:23 -- common/autobuild_common.sh@495 -- $ get_config_params 00:26:05.813 18:43:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:26:05.813 18:43:23 -- common/autotest_common.sh@10 -- $ set +x 00:26:05.813 18:43:23 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:26:05.813 18:43:23 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:26:05.813 18:43:23 -- pm/common@17 -- $ local monitor 00:26:05.813 18:43:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:05.813 18:43:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:05.813 18:43:23 -- pm/common@25 -- $ sleep 1 00:26:05.813 18:43:23 -- pm/common@21 -- $ date +%s 00:26:05.813 18:43:23 -- pm/common@21 -- $ date +%s 00:26:05.813 18:43:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733683403 00:26:05.813 18:43:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733683403 00:26:05.813 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733683403_collect-vmstat.pm.log 00:26:05.813 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733683403_collect-cpu-load.pm.log 00:26:07.190 18:43:24 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:26:07.190 18:43:24 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:26:07.190 18:43:24 -- spdk/autopackage.sh@14 -- $ timing_finish 00:26:07.190 18:43:24 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:07.190 18:43:24 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:07.190 18:43:24 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:07.190 18:43:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:07.190 18:43:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:07.190 18:43:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:07.190 18:43:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:07.190 18:43:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:07.190 18:43:24 -- pm/common@44 -- $ pid=101899 00:26:07.190 18:43:24 -- pm/common@50 -- $ kill -TERM 101899 00:26:07.190 18:43:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:07.190 18:43:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:07.190 18:43:24 -- pm/common@44 -- $ pid=101901 00:26:07.190 18:43:24 -- pm/common@50 -- $ kill -TERM 101901 00:26:07.190 + [[ -n 5940 ]] 00:26:07.190 + sudo kill 5940 00:26:07.198 [Pipeline] } 00:26:07.211 [Pipeline] // timeout 00:26:07.216 [Pipeline] } 00:26:07.227 [Pipeline] // stage 00:26:07.232 [Pipeline] } 00:26:07.243 [Pipeline] // catchError 00:26:07.251 [Pipeline] stage 00:26:07.253 [Pipeline] { (Stop VM) 00:26:07.263 [Pipeline] sh 00:26:07.545 + vagrant halt 00:26:10.111 ==> default: Halting domain... 00:26:16.697 [Pipeline] sh 00:26:16.979 + vagrant destroy -f 00:26:19.519 ==> default: Removing domain... 
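Annotation: the teardown traced above follows a consistent pattern: stop_monitor_resources signals the CPU-load and vmstat collectors through their pid files, a leftover process (pid 5940) is killed, and the Vagrant-managed test VM is halted and its domain destroyed. A hedged sketch of the same pattern; the power directory path is taken from the log, while the surrounding logic is illustrative only.

# Signal the resource monitors recorded in pid files, then tear down the VM.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
    [ -e "$pidfile" ] && kill -TERM "$(cat "$pidfile")" 2>/dev/null
done
vagrant halt          # graceful shutdown of the autotest guest
vagrant destroy -f    # remove the domain and its storage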
00:26:20.102 [Pipeline] sh 00:26:20.385 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:20.395 [Pipeline] } 00:26:20.412 [Pipeline] // stage 00:26:20.418 [Pipeline] } 00:26:20.432 [Pipeline] // dir 00:26:20.438 [Pipeline] } 00:26:20.454 [Pipeline] // wrap 00:26:20.461 [Pipeline] } 00:26:20.475 [Pipeline] // catchError 00:26:20.485 [Pipeline] stage 00:26:20.487 [Pipeline] { (Epilogue) 00:26:20.502 [Pipeline] sh 00:26:20.911 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:26.197 [Pipeline] catchError 00:26:26.199 [Pipeline] { 00:26:26.208 [Pipeline] sh 00:26:26.485 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:26.485 Artifacts sizes are good 00:26:26.493 [Pipeline] } 00:26:26.505 [Pipeline] // catchError 00:26:26.515 [Pipeline] archiveArtifacts 00:26:26.521 Archiving artifacts 00:26:26.645 [Pipeline] cleanWs 00:26:26.656 [WS-CLEANUP] Deleting project workspace... 00:26:26.656 [WS-CLEANUP] Deferred wipeout is used... 00:26:26.662 [WS-CLEANUP] done 00:26:26.664 [Pipeline] } 00:26:26.674 [Pipeline] // stage 00:26:26.678 [Pipeline] } 00:26:26.688 [Pipeline] // node 00:26:26.692 [Pipeline] End of Pipeline 00:26:26.728 Finished: SUCCESS
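Closing annotation on the epilogue stage: compress_artifacts.sh packs the output directory and check_artifacts_size.sh verifies it stays within the job's size budget ("Artifacts sizes are good") before archiveArtifacts and the final workspace wipe. A hedged sketch of that kind of guard is below; the 500 MB threshold and the single-archive layout are assumptions for illustration, not the job's actual values.

# Compress the collected output and enforce a size budget before archiving.
out=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
tar -C "$(dirname "$out")" -czf "$out.tar.gz" "$(basename "$out")"
size_mb=$(( $(stat -c%s "$out.tar.gz") / 1024 / 1024 ))
if [ "$size_mb" -gt 500 ]; then              # assumed threshold
    echo "Artifacts too large: ${size_mb} MB" >&2
    exit 1
fi
echo "Artifacts sizes are good (${size_mb} MB)"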